00:00:00.001 Started by upstream project "autotest-nightly" build number 3701 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3082 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.104 The recommended git tool is: git 00:00:00.105 using credential 00000000-0000-0000-0000-000000000002 00:00:00.107 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.153 Fetching changes from the remote Git repository 00:00:00.154 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.193 Using shallow fetch with depth 1 00:00:00.193 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.193 > git --version # timeout=10 00:00:00.215 > git --version # 'git version 2.39.2' 00:00:00.215 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.215 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.215 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.124 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.136 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.147 Checking out Revision f620ee97e10840540f53609861ee9b86caa3c192 (FETCH_HEAD) 00:00:05.147 > git config core.sparsecheckout # timeout=10 00:00:05.157 > git read-tree -mu HEAD # timeout=10 00:00:05.174 > git checkout -f f620ee97e10840540f53609861ee9b86caa3c192 # timeout=5 00:00:05.194 Commit message: "change IP of vertiv1 PDU" 00:00:05.195 > git rev-list --no-walk f620ee97e10840540f53609861ee9b86caa3c192 # timeout=10 00:00:05.312 [Pipeline] Start of Pipeline 00:00:05.330 [Pipeline] library 00:00:05.331 Loading library shm_lib@master 00:00:05.332 Library shm_lib@master is cached. Copying from home. 00:00:05.349 [Pipeline] node 00:00:05.357 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.358 [Pipeline] { 00:00:05.369 [Pipeline] catchError 00:00:05.370 [Pipeline] { 00:00:05.381 [Pipeline] wrap 00:00:05.388 [Pipeline] { 00:00:05.393 [Pipeline] stage 00:00:05.395 [Pipeline] { (Prologue) 00:00:05.550 [Pipeline] sh 00:00:05.835 + logger -p user.info -t JENKINS-CI 00:00:05.853 [Pipeline] echo 00:00:05.854 Node: CYP12 00:00:05.859 [Pipeline] sh 00:00:06.152 [Pipeline] setCustomBuildProperty 00:00:06.162 [Pipeline] echo 00:00:06.163 Cleanup processes 00:00:06.167 [Pipeline] sh 00:00:06.451 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.462 2687204 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.499 [Pipeline] sh 00:00:06.779 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.779 ++ grep -v 'sudo pgrep' 00:00:06.779 ++ awk '{print $1}' 00:00:06.779 + sudo kill -9 00:00:06.779 + true 00:00:06.793 [Pipeline] cleanWs 00:00:06.826 [WS-CLEANUP] Deleting project workspace... 00:00:06.826 [WS-CLEANUP] Deferred wipeout is used... 
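The "Cleanup processes" step recorded above boils down to a small pgrep/kill idiom; the following is a sketch reconstructed from the pgrep, grep, awk and kill lines in this log, not the pipeline's actual shared-library code. The workspace path is the one this job uses.

#!/usr/bin/env bash
# Kill any stale processes still running out of the job workspace before
# the new run starts. pgrep -af matches the full command line, so the
# pgrep call matches itself (PID 2687204 in the log); grep -v filters it out.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

# List matching processes first (as the log does), then collect their PIDs.
sudo pgrep -af "$WORKSPACE/spdk" || true
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

# kill -9 with an empty PID list fails; the trailing "|| true" mirrors the
# "+ true" line in the log so the stage never aborts on a clean node.
sudo kill -9 $pids || true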
00:00:06.834 [WS-CLEANUP] done 00:00:06.838 [Pipeline] setCustomBuildProperty 00:00:06.850 [Pipeline] sh 00:00:07.136 + sudo git config --global --replace-all safe.directory '*' 00:00:07.199 [Pipeline] nodesByLabel 00:00:07.200 Found a total of 1 nodes with the 'sorcerer' label 00:00:07.210 [Pipeline] httpRequest 00:00:07.215 HttpMethod: GET 00:00:07.216 URL: http://10.211.164.101/packages/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:07.219 Sending request to url: http://10.211.164.101/packages/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:07.241 Response Code: HTTP/1.1 200 OK 00:00:07.241 Success: Status code 200 is in the accepted range: 200,404 00:00:07.242 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:11.150 [Pipeline] sh 00:00:11.437 + tar --no-same-owner -xf jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:11.457 [Pipeline] httpRequest 00:00:11.461 HttpMethod: GET 00:00:11.462 URL: http://10.211.164.101/packages/spdk_b084cba072707e2667d482bdb3443f61a33be232.tar.gz 00:00:11.462 Sending request to url: http://10.211.164.101/packages/spdk_b084cba072707e2667d482bdb3443f61a33be232.tar.gz 00:00:11.475 Response Code: HTTP/1.1 200 OK 00:00:11.476 Success: Status code 200 is in the accepted range: 200,404 00:00:11.476 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b084cba072707e2667d482bdb3443f61a33be232.tar.gz 00:00:53.312 [Pipeline] sh 00:00:53.599 + tar --no-same-owner -xf spdk_b084cba072707e2667d482bdb3443f61a33be232.tar.gz 00:00:56.160 [Pipeline] sh 00:00:56.450 + git -C spdk log --oneline -n5 00:00:56.450 b084cba07 lib/blob: fixed potential expression overflow 00:00:56.450 ccad22cf9 test: split interrupt_common.sh 00:00:56.450 d4e4841d1 nvmf/vfio-user: improve mapping failure message 00:00:56.450 3e787bba6 nvmf: initialize sgroup->queued when poll group is created 00:00:56.450 b269b0edc doc: add lvol/blob shallow copy descriptions 00:00:56.464 [Pipeline] } 00:00:56.482 [Pipeline] // stage 00:00:56.490 [Pipeline] stage 00:00:56.492 [Pipeline] { (Prepare) 00:00:56.508 [Pipeline] writeFile 00:00:56.524 [Pipeline] sh 00:00:56.809 + logger -p user.info -t JENKINS-CI 00:00:56.820 [Pipeline] sh 00:00:57.100 + logger -p user.info -t JENKINS-CI 00:00:57.112 [Pipeline] sh 00:00:57.396 + cat autorun-spdk.conf 00:00:57.396 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.396 SPDK_TEST_NVMF=1 00:00:57.396 SPDK_TEST_NVME_CLI=1 00:00:57.396 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.396 SPDK_TEST_NVMF_NICS=e810 00:00:57.396 SPDK_RUN_UBSAN=1 00:00:57.396 NET_TYPE=phy 00:00:57.405 RUN_NIGHTLY=1 00:00:57.408 [Pipeline] readFile 00:00:57.427 [Pipeline] withEnv 00:00:57.429 [Pipeline] { 00:00:57.440 [Pipeline] sh 00:00:57.722 + set -ex 00:00:57.722 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:57.722 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:57.722 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.722 ++ SPDK_TEST_NVMF=1 00:00:57.722 ++ SPDK_TEST_NVME_CLI=1 00:00:57.722 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.722 ++ SPDK_TEST_NVMF_NICS=e810 00:00:57.722 ++ SPDK_RUN_UBSAN=1 00:00:57.722 ++ NET_TYPE=phy 00:00:57.722 ++ RUN_NIGHTLY=1 00:00:57.722 + case $SPDK_TEST_NVMF_NICS in 00:00:57.722 + DRIVERS=ice 00:00:57.722 + [[ tcp == \r\d\m\a ]] 00:00:57.722 + [[ -n ice ]] 00:00:57.722 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:57.722 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:57.722 rmmod: ERROR: Module 
mlx5_ib is not currently loaded 00:00:57.722 rmmod: ERROR: Module irdma is not currently loaded 00:00:57.722 rmmod: ERROR: Module i40iw is not currently loaded 00:00:57.722 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:57.722 + true 00:00:57.722 + for D in $DRIVERS 00:00:57.722 + sudo modprobe ice 00:00:57.722 + exit 0 00:00:57.732 [Pipeline] } 00:00:57.750 [Pipeline] // withEnv 00:00:57.756 [Pipeline] } 00:00:57.772 [Pipeline] // stage 00:00:57.781 [Pipeline] catchError 00:00:57.783 [Pipeline] { 00:00:57.797 [Pipeline] timeout 00:00:57.797 Timeout set to expire in 40 min 00:00:57.799 [Pipeline] { 00:00:57.812 [Pipeline] stage 00:00:57.814 [Pipeline] { (Tests) 00:00:57.827 [Pipeline] sh 00:00:58.118 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:58.118 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:58.118 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:58.118 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:58.118 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:58.118 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:58.118 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:58.118 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:58.118 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:58.118 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:58.118 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:58.118 + source /etc/os-release 00:00:58.118 ++ NAME='Fedora Linux' 00:00:58.118 ++ VERSION='38 (Cloud Edition)' 00:00:58.118 ++ ID=fedora 00:00:58.118 ++ VERSION_ID=38 00:00:58.118 ++ VERSION_CODENAME= 00:00:58.118 ++ PLATFORM_ID=platform:f38 00:00:58.118 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:58.118 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:58.118 ++ LOGO=fedora-logo-icon 00:00:58.118 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:58.118 ++ HOME_URL=https://fedoraproject.org/ 00:00:58.118 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:58.118 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:58.118 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:58.118 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:58.118 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:58.118 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:58.118 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:58.118 ++ SUPPORT_END=2024-05-14 00:00:58.118 ++ VARIANT='Cloud Edition' 00:00:58.118 ++ VARIANT_ID=cloud 00:00:58.118 + uname -a 00:00:58.118 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:58.118 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:01.420 Hugepages 00:01:01.420 node hugesize free / total 00:01:01.420 node0 1048576kB 0 / 0 00:01:01.420 node0 2048kB 0 / 0 00:01:01.420 node1 1048576kB 0 / 0 00:01:01.420 node1 2048kB 0 / 0 00:01:01.420 00:01:01.420 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:01.420 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:01.420 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:01.420 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:01.420 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:01.420 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:01.420 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:01.420 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:01.420 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - 
- 00:01:01.681 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:01.681 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:01.681 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:01.681 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:01.681 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:01.681 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:01.681 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:01.681 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:01.681 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:01.681 + rm -f /tmp/spdk-ld-path 00:01:01.681 + source autorun-spdk.conf 00:01:01.681 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.681 ++ SPDK_TEST_NVMF=1 00:01:01.681 ++ SPDK_TEST_NVME_CLI=1 00:01:01.681 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.681 ++ SPDK_TEST_NVMF_NICS=e810 00:01:01.681 ++ SPDK_RUN_UBSAN=1 00:01:01.681 ++ NET_TYPE=phy 00:01:01.681 ++ RUN_NIGHTLY=1 00:01:01.681 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:01.681 + [[ -n '' ]] 00:01:01.681 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:01.681 + for M in /var/spdk/build-*-manifest.txt 00:01:01.681 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:01.681 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.681 + for M in /var/spdk/build-*-manifest.txt 00:01:01.681 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:01.681 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.681 ++ uname 00:01:01.681 + [[ Linux == \L\i\n\u\x ]] 00:01:01.681 + sudo dmesg -T 00:01:01.681 + sudo dmesg --clear 00:01:01.681 + dmesg_pid=2688289 00:01:01.681 + [[ Fedora Linux == FreeBSD ]] 00:01:01.681 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.681 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.681 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:01.681 + [[ -x /usr/src/fio-static/fio ]] 00:01:01.681 + export FIO_BIN=/usr/src/fio-static/fio 00:01:01.681 + FIO_BIN=/usr/src/fio-static/fio 00:01:01.681 + sudo dmesg -Tw 00:01:01.681 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:01.681 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:01.681 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:01.681 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.681 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.681 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:01.681 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.681 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.681 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:01.681 Test configuration: 00:01:01.681 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.681 SPDK_TEST_NVMF=1 00:01:01.681 SPDK_TEST_NVME_CLI=1 00:01:01.681 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.681 SPDK_TEST_NVMF_NICS=e810 00:01:01.681 SPDK_RUN_UBSAN=1 00:01:01.681 NET_TYPE=phy 00:01:01.943 RUN_NIGHTLY=1 20:14:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:01.943 20:14:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:01.943 20:14:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:01.943 20:14:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:01.943 20:14:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.943 20:14:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.943 20:14:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.943 20:14:17 -- paths/export.sh@5 -- $ export PATH 00:01:01.943 20:14:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.943 20:14:17 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:01.943 20:14:17 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:01.943 20:14:17 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715624057.XXXXXX 00:01:01.943 20:14:17 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715624057.3fhvRi 00:01:01.943 20:14:17 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:01.943 20:14:17 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 
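The "Test configuration" echoed above comes from the job's autorun-spdk.conf, which spdk/autorun.sh sources to decide which suites to run. A minimal conf file matching this run (values copied from the log) looks like this:

# autorun-spdk.conf for the nvmf-tcp-phy nightly configuration shown above
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NVMF=1
SPDK_TEST_NVME_CLI=1
SPDK_TEST_NVMF_TRANSPORT=tcp
SPDK_TEST_NVMF_NICS=e810
SPDK_RUN_UBSAN=1
NET_TYPE=phy
RUN_NIGHTLY=1

It is consumed exactly as the log shows: spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf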
00:01:01.943 20:14:17 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:01.943 20:14:17 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:01.943 20:14:17 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:01.943 20:14:17 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:01.943 20:14:17 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:01.943 20:14:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.943 20:14:17 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:01.943 20:14:17 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:01.943 20:14:17 -- pm/common@17 -- $ local monitor 00:01:01.943 20:14:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.943 20:14:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.943 20:14:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.943 20:14:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.943 20:14:17 -- pm/common@21 -- $ date +%s 00:01:01.943 20:14:17 -- pm/common@25 -- $ sleep 1 00:01:01.943 20:14:17 -- pm/common@21 -- $ date +%s 00:01:01.943 20:14:17 -- pm/common@21 -- $ date +%s 00:01:01.943 20:14:17 -- pm/common@21 -- $ date +%s 00:01:01.943 20:14:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715624057 00:01:01.943 20:14:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715624057 00:01:01.943 20:14:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715624057 00:01:01.943 20:14:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715624057 00:01:01.943 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715624057_collect-vmstat.pm.log 00:01:01.943 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715624057_collect-cpu-load.pm.log 00:01:01.943 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715624057_collect-cpu-temp.pm.log 00:01:01.943 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715624057_collect-bmc-pm.bmc.pm.log 00:01:02.919 20:14:18 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:02.919 20:14:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:02.919 
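The start_monitor_resources calls above launch the SPDK perf/pm collectors against the job's output/power directory with a shared epoch timestamp in the log name. A simplified sketch of that call pattern follows; it is not the actual autobuild common.sh code, and backgrounding the collectors with & is an illustrative assumption.

#!/usr/bin/env bash
# Launch the resource monitors the way this stage does: each collector
# writes a monitor.autobuild.sh.<epoch> log under output/power.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$SPDK/../output/power
now=$(date +%s)

mkdir -p "$OUT"
for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
  "$SPDK/scripts/perf/pm/$mon" -d "$OUT" -l -p "monitor.autobuild.sh.$now" &
done
# BMC power readings need elevated privileges, hence the sudo -E in the log.
sudo -E "$SPDK/scripts/perf/pm/collect-bmc-pm" -d "$OUT" -l -p "monitor.autobuild.sh.$now" &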
20:14:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:02.919 20:14:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:02.919 20:14:18 -- spdk/autobuild.sh@16 -- $ date -u 00:01:02.919 Mon May 13 06:14:18 PM UTC 2024 00:01:02.919 20:14:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:02.919 v24.05-pre-599-gb084cba07 00:01:02.919 20:14:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:02.919 20:14:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:02.919 20:14:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:02.919 20:14:18 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:02.919 20:14:18 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:02.919 20:14:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.919 ************************************ 00:01:02.919 START TEST ubsan 00:01:02.919 ************************************ 00:01:02.919 20:14:18 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:02.919 using ubsan 00:01:02.919 00:01:02.919 real 0m0.001s 00:01:02.919 user 0m0.000s 00:01:02.919 sys 0m0.000s 00:01:02.919 20:14:18 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:02.919 20:14:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:02.919 ************************************ 00:01:02.919 END TEST ubsan 00:01:02.919 ************************************ 00:01:02.919 20:14:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:02.919 20:14:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:02.919 20:14:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:02.919 20:14:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:02.919 20:14:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:02.919 20:14:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:02.919 20:14:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:02.919 20:14:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:02.919 20:14:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:03.180 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:03.180 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:03.441 Using 'verbs' RDMA provider 00:01:19.305 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:31.706 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:31.706 Creating mk/config.mk...done. 00:01:31.706 Creating mk/cc.flags.mk...done. 00:01:31.706 Type 'make' to build. 00:01:31.706 20:14:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:31.706 20:14:46 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:31.706 20:14:46 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:31.706 20:14:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.706 ************************************ 00:01:31.706 START TEST make 00:01:31.706 ************************************ 00:01:31.706 20:14:46 make -- common/autotest_common.sh@1121 -- $ make -j144 00:01:31.706 make[1]: Nothing to be done for 'all'. 
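Reproducing this build step outside Jenkins is roughly the following manual sequence; the configure flags are copied from the autobuild.sh@67 line above, while the checkout path and -j value are specific to this runner and should be adjusted locally.

#!/usr/bin/env bash
# Configure SPDK with the same options this nightly job used, then build.
set -e
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-shared

# The job runs "make -j144" on this 144-thread node; use the local core count.
make -j144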
00:01:39.852 The Meson build system 00:01:39.852 Version: 1.3.1 00:01:39.852 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:39.852 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:39.852 Build type: native build 00:01:39.852 Program cat found: YES (/usr/bin/cat) 00:01:39.852 Project name: DPDK 00:01:39.852 Project version: 23.11.0 00:01:39.852 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:39.852 C linker for the host machine: cc ld.bfd 2.39-16 00:01:39.852 Host machine cpu family: x86_64 00:01:39.852 Host machine cpu: x86_64 00:01:39.852 Message: ## Building in Developer Mode ## 00:01:39.852 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:39.852 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:39.852 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:39.852 Program python3 found: YES (/usr/bin/python3) 00:01:39.852 Program cat found: YES (/usr/bin/cat) 00:01:39.852 Compiler for C supports arguments -march=native: YES 00:01:39.852 Checking for size of "void *" : 8 00:01:39.852 Checking for size of "void *" : 8 (cached) 00:01:39.852 Library m found: YES 00:01:39.852 Library numa found: YES 00:01:39.852 Has header "numaif.h" : YES 00:01:39.852 Library fdt found: NO 00:01:39.852 Library execinfo found: NO 00:01:39.852 Has header "execinfo.h" : YES 00:01:39.852 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:39.852 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:39.853 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:39.853 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:39.853 Run-time dependency openssl found: YES 3.0.9 00:01:39.853 Run-time dependency libpcap found: YES 1.10.4 00:01:39.853 Has header "pcap.h" with dependency libpcap: YES 00:01:39.853 Compiler for C supports arguments -Wcast-qual: YES 00:01:39.853 Compiler for C supports arguments -Wdeprecated: YES 00:01:39.853 Compiler for C supports arguments -Wformat: YES 00:01:39.853 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:39.853 Compiler for C supports arguments -Wformat-security: NO 00:01:39.853 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:39.853 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:39.853 Compiler for C supports arguments -Wnested-externs: YES 00:01:39.853 Compiler for C supports arguments -Wold-style-definition: YES 00:01:39.853 Compiler for C supports arguments -Wpointer-arith: YES 00:01:39.853 Compiler for C supports arguments -Wsign-compare: YES 00:01:39.853 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:39.853 Compiler for C supports arguments -Wundef: YES 00:01:39.853 Compiler for C supports arguments -Wwrite-strings: YES 00:01:39.853 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:39.853 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:39.853 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:39.853 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:39.853 Program objdump found: YES (/usr/bin/objdump) 00:01:39.853 Compiler for C supports arguments -mavx512f: YES 00:01:39.853 Checking if "AVX512 checking" compiles: YES 00:01:39.853 Fetching value of define "__SSE4_2__" : 1 00:01:39.853 Fetching value of 
define "__AES__" : 1 00:01:39.853 Fetching value of define "__AVX__" : 1 00:01:39.853 Fetching value of define "__AVX2__" : 1 00:01:39.853 Fetching value of define "__AVX512BW__" : 1 00:01:39.853 Fetching value of define "__AVX512CD__" : 1 00:01:39.853 Fetching value of define "__AVX512DQ__" : 1 00:01:39.853 Fetching value of define "__AVX512F__" : 1 00:01:39.853 Fetching value of define "__AVX512VL__" : 1 00:01:39.853 Fetching value of define "__PCLMUL__" : 1 00:01:39.853 Fetching value of define "__RDRND__" : 1 00:01:39.853 Fetching value of define "__RDSEED__" : 1 00:01:39.853 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:39.853 Fetching value of define "__znver1__" : (undefined) 00:01:39.853 Fetching value of define "__znver2__" : (undefined) 00:01:39.853 Fetching value of define "__znver3__" : (undefined) 00:01:39.853 Fetching value of define "__znver4__" : (undefined) 00:01:39.853 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:39.853 Message: lib/log: Defining dependency "log" 00:01:39.853 Message: lib/kvargs: Defining dependency "kvargs" 00:01:39.853 Message: lib/telemetry: Defining dependency "telemetry" 00:01:39.853 Checking for function "getentropy" : NO 00:01:39.853 Message: lib/eal: Defining dependency "eal" 00:01:39.853 Message: lib/ring: Defining dependency "ring" 00:01:39.853 Message: lib/rcu: Defining dependency "rcu" 00:01:39.853 Message: lib/mempool: Defining dependency "mempool" 00:01:39.853 Message: lib/mbuf: Defining dependency "mbuf" 00:01:39.853 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:39.853 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:39.853 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:39.853 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:39.853 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:39.853 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:39.853 Compiler for C supports arguments -mpclmul: YES 00:01:39.853 Compiler for C supports arguments -maes: YES 00:01:39.853 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:39.853 Compiler for C supports arguments -mavx512bw: YES 00:01:39.853 Compiler for C supports arguments -mavx512dq: YES 00:01:39.853 Compiler for C supports arguments -mavx512vl: YES 00:01:39.853 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:39.853 Compiler for C supports arguments -mavx2: YES 00:01:39.853 Compiler for C supports arguments -mavx: YES 00:01:39.853 Message: lib/net: Defining dependency "net" 00:01:39.853 Message: lib/meter: Defining dependency "meter" 00:01:39.853 Message: lib/ethdev: Defining dependency "ethdev" 00:01:39.853 Message: lib/pci: Defining dependency "pci" 00:01:39.853 Message: lib/cmdline: Defining dependency "cmdline" 00:01:39.853 Message: lib/hash: Defining dependency "hash" 00:01:39.853 Message: lib/timer: Defining dependency "timer" 00:01:39.853 Message: lib/compressdev: Defining dependency "compressdev" 00:01:39.853 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:39.853 Message: lib/dmadev: Defining dependency "dmadev" 00:01:39.853 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:39.853 Message: lib/power: Defining dependency "power" 00:01:39.853 Message: lib/reorder: Defining dependency "reorder" 00:01:39.853 Message: lib/security: Defining dependency "security" 00:01:39.853 Has header "linux/userfaultfd.h" : YES 00:01:39.853 Has header "linux/vduse.h" : YES 00:01:39.853 Message: lib/vhost: Defining dependency "vhost" 00:01:39.853 Compiler for C supports 
arguments -Wno-format-truncation: YES (cached) 00:01:39.853 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:39.853 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:39.853 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:39.853 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:39.853 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:39.853 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:39.853 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:39.853 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:39.853 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:39.853 Program doxygen found: YES (/usr/bin/doxygen) 00:01:39.853 Configuring doxy-api-html.conf using configuration 00:01:39.853 Configuring doxy-api-man.conf using configuration 00:01:39.853 Program mandb found: YES (/usr/bin/mandb) 00:01:39.853 Program sphinx-build found: NO 00:01:39.853 Configuring rte_build_config.h using configuration 00:01:39.853 Message: 00:01:39.853 ================= 00:01:39.853 Applications Enabled 00:01:39.853 ================= 00:01:39.853 00:01:39.853 apps: 00:01:39.853 00:01:39.853 00:01:39.853 Message: 00:01:39.853 ================= 00:01:39.853 Libraries Enabled 00:01:39.853 ================= 00:01:39.853 00:01:39.853 libs: 00:01:39.853 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:39.853 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:39.853 cryptodev, dmadev, power, reorder, security, vhost, 00:01:39.853 00:01:39.853 Message: 00:01:39.853 =============== 00:01:39.853 Drivers Enabled 00:01:39.853 =============== 00:01:39.853 00:01:39.853 common: 00:01:39.853 00:01:39.853 bus: 00:01:39.853 pci, vdev, 00:01:39.853 mempool: 00:01:39.853 ring, 00:01:39.853 dma: 00:01:39.853 00:01:39.853 net: 00:01:39.853 00:01:39.853 crypto: 00:01:39.853 00:01:39.853 compress: 00:01:39.853 00:01:39.853 vdpa: 00:01:39.853 00:01:39.853 00:01:39.853 Message: 00:01:39.853 ================= 00:01:39.853 Content Skipped 00:01:39.853 ================= 00:01:39.853 00:01:39.853 apps: 00:01:39.853 dumpcap: explicitly disabled via build config 00:01:39.853 graph: explicitly disabled via build config 00:01:39.853 pdump: explicitly disabled via build config 00:01:39.853 proc-info: explicitly disabled via build config 00:01:39.853 test-acl: explicitly disabled via build config 00:01:39.853 test-bbdev: explicitly disabled via build config 00:01:39.853 test-cmdline: explicitly disabled via build config 00:01:39.853 test-compress-perf: explicitly disabled via build config 00:01:39.853 test-crypto-perf: explicitly disabled via build config 00:01:39.853 test-dma-perf: explicitly disabled via build config 00:01:39.853 test-eventdev: explicitly disabled via build config 00:01:39.853 test-fib: explicitly disabled via build config 00:01:39.853 test-flow-perf: explicitly disabled via build config 00:01:39.853 test-gpudev: explicitly disabled via build config 00:01:39.853 test-mldev: explicitly disabled via build config 00:01:39.853 test-pipeline: explicitly disabled via build config 00:01:39.853 test-pmd: explicitly disabled via build config 00:01:39.853 test-regex: explicitly disabled via build config 00:01:39.853 test-sad: explicitly disabled via build config 00:01:39.853 test-security-perf: explicitly disabled via build config 00:01:39.853 00:01:39.853 libs: 00:01:39.853 metrics: explicitly 
disabled via build config 00:01:39.853 acl: explicitly disabled via build config 00:01:39.853 bbdev: explicitly disabled via build config 00:01:39.853 bitratestats: explicitly disabled via build config 00:01:39.853 bpf: explicitly disabled via build config 00:01:39.853 cfgfile: explicitly disabled via build config 00:01:39.853 distributor: explicitly disabled via build config 00:01:39.853 efd: explicitly disabled via build config 00:01:39.853 eventdev: explicitly disabled via build config 00:01:39.853 dispatcher: explicitly disabled via build config 00:01:39.853 gpudev: explicitly disabled via build config 00:01:39.853 gro: explicitly disabled via build config 00:01:39.853 gso: explicitly disabled via build config 00:01:39.853 ip_frag: explicitly disabled via build config 00:01:39.853 jobstats: explicitly disabled via build config 00:01:39.853 latencystats: explicitly disabled via build config 00:01:39.853 lpm: explicitly disabled via build config 00:01:39.853 member: explicitly disabled via build config 00:01:39.853 pcapng: explicitly disabled via build config 00:01:39.853 rawdev: explicitly disabled via build config 00:01:39.853 regexdev: explicitly disabled via build config 00:01:39.853 mldev: explicitly disabled via build config 00:01:39.853 rib: explicitly disabled via build config 00:01:39.853 sched: explicitly disabled via build config 00:01:39.853 stack: explicitly disabled via build config 00:01:39.853 ipsec: explicitly disabled via build config 00:01:39.853 pdcp: explicitly disabled via build config 00:01:39.853 fib: explicitly disabled via build config 00:01:39.853 port: explicitly disabled via build config 00:01:39.853 pdump: explicitly disabled via build config 00:01:39.853 table: explicitly disabled via build config 00:01:39.853 pipeline: explicitly disabled via build config 00:01:39.853 graph: explicitly disabled via build config 00:01:39.853 node: explicitly disabled via build config 00:01:39.853 00:01:39.853 drivers: 00:01:39.853 common/cpt: not in enabled drivers build config 00:01:39.853 common/dpaax: not in enabled drivers build config 00:01:39.854 common/iavf: not in enabled drivers build config 00:01:39.854 common/idpf: not in enabled drivers build config 00:01:39.854 common/mvep: not in enabled drivers build config 00:01:39.854 common/octeontx: not in enabled drivers build config 00:01:39.854 bus/auxiliary: not in enabled drivers build config 00:01:39.854 bus/cdx: not in enabled drivers build config 00:01:39.854 bus/dpaa: not in enabled drivers build config 00:01:39.854 bus/fslmc: not in enabled drivers build config 00:01:39.854 bus/ifpga: not in enabled drivers build config 00:01:39.854 bus/platform: not in enabled drivers build config 00:01:39.854 bus/vmbus: not in enabled drivers build config 00:01:39.854 common/cnxk: not in enabled drivers build config 00:01:39.854 common/mlx5: not in enabled drivers build config 00:01:39.854 common/nfp: not in enabled drivers build config 00:01:39.854 common/qat: not in enabled drivers build config 00:01:39.854 common/sfc_efx: not in enabled drivers build config 00:01:39.854 mempool/bucket: not in enabled drivers build config 00:01:39.854 mempool/cnxk: not in enabled drivers build config 00:01:39.854 mempool/dpaa: not in enabled drivers build config 00:01:39.854 mempool/dpaa2: not in enabled drivers build config 00:01:39.854 mempool/octeontx: not in enabled drivers build config 00:01:39.854 mempool/stack: not in enabled drivers build config 00:01:39.854 dma/cnxk: not in enabled drivers build config 00:01:39.854 dma/dpaa: not in 
enabled drivers build config 00:01:39.854 dma/dpaa2: not in enabled drivers build config 00:01:39.854 dma/hisilicon: not in enabled drivers build config 00:01:39.854 dma/idxd: not in enabled drivers build config 00:01:39.854 dma/ioat: not in enabled drivers build config 00:01:39.854 dma/skeleton: not in enabled drivers build config 00:01:39.854 net/af_packet: not in enabled drivers build config 00:01:39.854 net/af_xdp: not in enabled drivers build config 00:01:39.854 net/ark: not in enabled drivers build config 00:01:39.854 net/atlantic: not in enabled drivers build config 00:01:39.854 net/avp: not in enabled drivers build config 00:01:39.854 net/axgbe: not in enabled drivers build config 00:01:39.854 net/bnx2x: not in enabled drivers build config 00:01:39.854 net/bnxt: not in enabled drivers build config 00:01:39.854 net/bonding: not in enabled drivers build config 00:01:39.854 net/cnxk: not in enabled drivers build config 00:01:39.854 net/cpfl: not in enabled drivers build config 00:01:39.854 net/cxgbe: not in enabled drivers build config 00:01:39.854 net/dpaa: not in enabled drivers build config 00:01:39.854 net/dpaa2: not in enabled drivers build config 00:01:39.854 net/e1000: not in enabled drivers build config 00:01:39.854 net/ena: not in enabled drivers build config 00:01:39.854 net/enetc: not in enabled drivers build config 00:01:39.854 net/enetfec: not in enabled drivers build config 00:01:39.854 net/enic: not in enabled drivers build config 00:01:39.854 net/failsafe: not in enabled drivers build config 00:01:39.854 net/fm10k: not in enabled drivers build config 00:01:39.854 net/gve: not in enabled drivers build config 00:01:39.854 net/hinic: not in enabled drivers build config 00:01:39.854 net/hns3: not in enabled drivers build config 00:01:39.854 net/i40e: not in enabled drivers build config 00:01:39.854 net/iavf: not in enabled drivers build config 00:01:39.854 net/ice: not in enabled drivers build config 00:01:39.854 net/idpf: not in enabled drivers build config 00:01:39.854 net/igc: not in enabled drivers build config 00:01:39.854 net/ionic: not in enabled drivers build config 00:01:39.854 net/ipn3ke: not in enabled drivers build config 00:01:39.854 net/ixgbe: not in enabled drivers build config 00:01:39.854 net/mana: not in enabled drivers build config 00:01:39.854 net/memif: not in enabled drivers build config 00:01:39.854 net/mlx4: not in enabled drivers build config 00:01:39.854 net/mlx5: not in enabled drivers build config 00:01:39.854 net/mvneta: not in enabled drivers build config 00:01:39.854 net/mvpp2: not in enabled drivers build config 00:01:39.854 net/netvsc: not in enabled drivers build config 00:01:39.854 net/nfb: not in enabled drivers build config 00:01:39.854 net/nfp: not in enabled drivers build config 00:01:39.854 net/ngbe: not in enabled drivers build config 00:01:39.854 net/null: not in enabled drivers build config 00:01:39.854 net/octeontx: not in enabled drivers build config 00:01:39.854 net/octeon_ep: not in enabled drivers build config 00:01:39.854 net/pcap: not in enabled drivers build config 00:01:39.854 net/pfe: not in enabled drivers build config 00:01:39.854 net/qede: not in enabled drivers build config 00:01:39.854 net/ring: not in enabled drivers build config 00:01:39.854 net/sfc: not in enabled drivers build config 00:01:39.854 net/softnic: not in enabled drivers build config 00:01:39.854 net/tap: not in enabled drivers build config 00:01:39.854 net/thunderx: not in enabled drivers build config 00:01:39.854 net/txgbe: not in enabled drivers 
build config 00:01:39.854 net/vdev_netvsc: not in enabled drivers build config 00:01:39.854 net/vhost: not in enabled drivers build config 00:01:39.854 net/virtio: not in enabled drivers build config 00:01:39.854 net/vmxnet3: not in enabled drivers build config 00:01:39.854 raw/*: missing internal dependency, "rawdev" 00:01:39.854 crypto/armv8: not in enabled drivers build config 00:01:39.854 crypto/bcmfs: not in enabled drivers build config 00:01:39.854 crypto/caam_jr: not in enabled drivers build config 00:01:39.854 crypto/ccp: not in enabled drivers build config 00:01:39.854 crypto/cnxk: not in enabled drivers build config 00:01:39.854 crypto/dpaa_sec: not in enabled drivers build config 00:01:39.854 crypto/dpaa2_sec: not in enabled drivers build config 00:01:39.854 crypto/ipsec_mb: not in enabled drivers build config 00:01:39.854 crypto/mlx5: not in enabled drivers build config 00:01:39.854 crypto/mvsam: not in enabled drivers build config 00:01:39.854 crypto/nitrox: not in enabled drivers build config 00:01:39.854 crypto/null: not in enabled drivers build config 00:01:39.854 crypto/octeontx: not in enabled drivers build config 00:01:39.854 crypto/openssl: not in enabled drivers build config 00:01:39.854 crypto/scheduler: not in enabled drivers build config 00:01:39.854 crypto/uadk: not in enabled drivers build config 00:01:39.854 crypto/virtio: not in enabled drivers build config 00:01:39.854 compress/isal: not in enabled drivers build config 00:01:39.854 compress/mlx5: not in enabled drivers build config 00:01:39.854 compress/octeontx: not in enabled drivers build config 00:01:39.854 compress/zlib: not in enabled drivers build config 00:01:39.854 regex/*: missing internal dependency, "regexdev" 00:01:39.854 ml/*: missing internal dependency, "mldev" 00:01:39.854 vdpa/ifc: not in enabled drivers build config 00:01:39.854 vdpa/mlx5: not in enabled drivers build config 00:01:39.854 vdpa/nfp: not in enabled drivers build config 00:01:39.854 vdpa/sfc: not in enabled drivers build config 00:01:39.854 event/*: missing internal dependency, "eventdev" 00:01:39.854 baseband/*: missing internal dependency, "bbdev" 00:01:39.854 gpu/*: missing internal dependency, "gpudev" 00:01:39.854 00:01:39.854 00:01:39.854 Build targets in project: 84 00:01:39.854 00:01:39.854 DPDK 23.11.0 00:01:39.854 00:01:39.854 User defined options 00:01:39.854 buildtype : debug 00:01:39.854 default_library : shared 00:01:39.854 libdir : lib 00:01:39.854 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:39.854 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:39.854 c_link_args : 00:01:39.854 cpu_instruction_set: native 00:01:39.854 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:39.854 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:39.854 enable_docs : false 00:01:39.854 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:39.854 enable_kmods : false 00:01:39.854 tests : false 00:01:39.854 00:01:39.854 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:39.854 ninja: Entering directory 
`/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:39.854 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:39.854 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:39.854 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:39.854 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:39.854 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:39.854 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:39.854 [7/264] Linking static target lib/librte_kvargs.a 00:01:39.854 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:39.854 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:39.854 [10/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:39.854 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:39.854 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:39.854 [13/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:39.854 [14/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:39.854 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:39.854 [16/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:39.854 [17/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:39.854 [18/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:40.113 [19/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:40.113 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:40.113 [21/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:40.113 [22/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:40.113 [23/264] Linking static target lib/librte_log.a 00:01:40.113 [24/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:40.113 [25/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:40.113 [26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:40.113 [27/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:40.113 [28/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:40.113 [29/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:40.113 [30/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:40.113 [31/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:40.113 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:40.113 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:40.113 [34/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:40.113 [35/264] Linking static target lib/librte_pci.a 00:01:40.113 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:40.113 [37/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:40.113 [38/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:40.113 [39/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:40.113 [40/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:40.113 [41/264] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:40.113 [42/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:40.113 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:40.370 [44/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.370 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:40.370 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:40.370 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:40.370 [48/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:40.370 [49/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:40.370 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:40.370 [51/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:40.370 [52/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.370 [53/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:40.370 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:40.370 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:40.370 [56/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:40.370 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:40.370 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:40.370 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:40.370 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:40.370 [61/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:40.370 [62/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:40.370 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:40.370 [64/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:40.370 [65/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:40.370 [66/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:40.370 [67/264] Linking static target lib/librte_meter.a 00:01:40.370 [68/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:40.370 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:40.370 [70/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:40.370 [71/264] Linking static target lib/librte_timer.a 00:01:40.370 [72/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:40.370 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:40.370 [74/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:40.370 [75/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:40.370 [76/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:40.370 [77/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:40.370 [78/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:40.370 [79/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:40.370 [80/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:40.370 [81/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:40.370 [82/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:40.370 [83/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:40.370 [84/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:40.370 [85/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:40.370 [86/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:40.370 [87/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:40.370 [88/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:40.370 [89/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:40.370 [90/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:40.370 [91/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:40.370 [92/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:40.370 [93/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:40.370 [94/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:40.370 [95/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:40.370 [96/264] Linking static target lib/librte_ring.a 00:01:40.370 [97/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:40.370 [98/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:40.370 [99/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:40.370 [100/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:40.370 [101/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:40.370 [102/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:40.370 [103/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:40.370 [104/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:40.370 [105/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:40.370 [106/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:40.370 [107/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:40.370 [108/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:40.370 [109/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:40.370 [110/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:40.370 [111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:40.370 [112/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:40.370 [113/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:40.630 [114/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:40.630 [115/264] Linking static target lib/librte_telemetry.a 00:01:40.630 [116/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:40.630 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:40.630 [118/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:40.630 [119/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:40.631 [120/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:40.631 [121/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:40.631 [122/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:40.631 [123/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:40.631 [124/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:40.631 [125/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:40.631 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:40.631 [127/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:40.631 [128/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:40.631 [129/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:40.631 [130/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:40.631 [131/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:40.631 [132/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:40.631 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.631 [134/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:40.631 [135/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.631 [136/264] Linking static target lib/librte_dmadev.a 00:01:40.631 [137/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:40.631 [138/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.631 [139/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:40.631 [140/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:40.631 [141/264] Linking static target lib/librte_cmdline.a 00:01:40.631 [142/264] Linking static target lib/librte_rcu.a 00:01:40.631 [143/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:40.631 [144/264] Linking static target lib/librte_mempool.a 00:01:40.631 [145/264] Linking static target lib/librte_compressdev.a 00:01:40.631 [146/264] Linking static target lib/librte_security.a 00:01:40.631 [147/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:40.631 [148/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:40.631 [149/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:40.631 [150/264] Linking static target lib/librte_net.a 00:01:40.631 [151/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.631 [152/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:40.631 [153/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:40.631 [154/264] Linking target lib/librte_log.so.24.0 00:01:40.631 [155/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:40.631 [156/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:40.631 [157/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:40.631 [158/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:40.631 [159/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:40.631 [160/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:40.631 [161/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:40.631 [162/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:40.631 [163/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:40.631 [164/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:40.631 [165/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:40.631 [166/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:40.631 [167/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:40.631 [168/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:40.631 [169/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:40.631 [170/264] Linking static target lib/librte_reorder.a 00:01:40.631 [171/264] Linking static target lib/librte_power.a 00:01:40.631 [172/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:40.631 [173/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.631 [174/264] Linking static target lib/librte_eal.a 00:01:40.631 [175/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:40.631 [176/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.631 [177/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.631 [178/264] Linking static target drivers/librte_bus_vdev.a 00:01:40.631 [179/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:40.631 [180/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:40.631 [181/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:40.631 [182/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.892 [183/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:40.892 [184/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:40.892 [185/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:40.892 [186/264] Linking static target lib/librte_mbuf.a 00:01:40.892 [187/264] Linking target lib/librte_kvargs.so.24.0 00:01:40.892 [188/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:40.892 [189/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.892 [190/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.892 [191/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.892 [192/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.892 [193/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:40.892 [194/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:40.892 [195/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.892 [196/264] Linking static target drivers/librte_bus_pci.a 00:01:40.892 [197/264] Linking static target lib/librte_hash.a 00:01:40.892 [198/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:40.892 [199/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.892 [200/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:40.892 [201/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.892 [202/264] 
Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.892 [203/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.892 [204/264] Linking static target drivers/librte_mempool_ring.a 00:01:41.152 [205/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:41.152 [206/264] Linking static target lib/librte_cryptodev.a 00:01:41.152 [207/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.152 [208/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.152 [209/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.152 [210/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.152 [211/264] Linking target lib/librte_telemetry.so.24.0 00:01:41.152 [212/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.152 [213/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:41.152 [214/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.413 [215/264] Linking static target lib/librte_ethdev.a 00:01:41.413 [216/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.413 [217/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:41.673 [218/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.673 [219/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.673 [220/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.673 [221/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.673 [222/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.934 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.194 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:42.455 [225/264] Linking static target lib/librte_vhost.a 00:01:43.027 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.412 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.001 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.384 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.384 [230/264] Linking target lib/librte_eal.so.24.0 00:01:52.384 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:52.384 [232/264] Linking target lib/librte_ring.so.24.0 00:01:52.384 [233/264] Linking target lib/librte_meter.so.24.0 00:01:52.384 [234/264] Linking target lib/librte_timer.so.24.0 00:01:52.384 [235/264] Linking target lib/librte_dmadev.so.24.0 00:01:52.384 [236/264] Linking target lib/librte_pci.so.24.0 00:01:52.384 [237/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:52.645 [238/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:52.645 [239/264] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:52.645 [240/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:52.645 [241/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:52.645 [242/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:52.645 [243/264] Linking target lib/librte_rcu.so.24.0 00:01:52.645 [244/264] Linking target lib/librte_mempool.so.24.0 00:01:52.645 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:52.905 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:52.905 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:52.905 [248/264] Linking target lib/librte_mbuf.so.24.0 00:01:52.905 [249/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:52.905 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:53.166 [251/264] Linking target lib/librte_net.so.24.0 00:01:53.166 [252/264] Linking target lib/librte_reorder.so.24.0 00:01:53.166 [253/264] Linking target lib/librte_compressdev.so.24.0 00:01:53.166 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:01:53.166 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:53.166 [256/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:53.166 [257/264] Linking target lib/librte_hash.so.24.0 00:01:53.166 [258/264] Linking target lib/librte_cmdline.so.24.0 00:01:53.166 [259/264] Linking target lib/librte_security.so.24.0 00:01:53.428 [260/264] Linking target lib/librte_ethdev.so.24.0 00:01:53.428 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:53.428 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:53.428 [263/264] Linking target lib/librte_power.so.24.0 00:01:53.428 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:53.428 INFO: autodetecting backend as ninja 00:01:53.428 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:54.813 CC lib/ut_mock/mock.o 00:01:54.813 CC lib/log/log.o 00:01:54.813 CC lib/log/log_flags.o 00:01:54.813 CC lib/log/log_deprecated.o 00:01:54.813 CC lib/ut/ut.o 00:01:54.813 LIB libspdk_ut_mock.a 00:01:54.813 LIB libspdk_log.a 00:01:54.813 LIB libspdk_ut.a 00:01:54.813 SO libspdk_ut_mock.so.6.0 00:01:54.813 SO libspdk_ut.so.2.0 00:01:54.813 SO libspdk_log.so.7.0 00:01:54.813 SYMLINK libspdk_ut_mock.so 00:01:54.813 SYMLINK libspdk_ut.so 00:01:54.813 SYMLINK libspdk_log.so 00:01:55.074 CC lib/util/base64.o 00:01:55.074 CC lib/util/bit_array.o 00:01:55.074 CC lib/util/cpuset.o 00:01:55.074 CC lib/util/crc16.o 00:01:55.074 CC lib/util/crc32.o 00:01:55.074 CC lib/dma/dma.o 00:01:55.074 CC lib/util/crc32c.o 00:01:55.074 CC lib/util/crc32_ieee.o 00:01:55.335 CC lib/util/crc64.o 00:01:55.335 CC lib/util/dif.o 00:01:55.335 CC lib/util/fd.o 00:01:55.335 CC lib/util/file.o 00:01:55.335 CC lib/util/hexlify.o 00:01:55.335 CC lib/util/iov.o 00:01:55.335 CC lib/util/math.o 00:01:55.335 CC lib/util/pipe.o 00:01:55.335 CC lib/util/strerror_tls.o 00:01:55.335 CC lib/util/string.o 00:01:55.335 CC lib/util/uuid.o 00:01:55.335 CC lib/util/fd_group.o 00:01:55.335 CC lib/util/xor.o 00:01:55.335 CC lib/util/zipf.o 00:01:55.335 CXX lib/trace_parser/trace.o 00:01:55.335 CC lib/ioat/ioat.o 
00:01:55.335 CC lib/vfio_user/host/vfio_user_pci.o 00:01:55.335 CC lib/vfio_user/host/vfio_user.o 00:01:55.335 LIB libspdk_dma.a 00:01:55.335 SO libspdk_dma.so.4.0 00:01:55.597 LIB libspdk_ioat.a 00:01:55.597 SYMLINK libspdk_dma.so 00:01:55.597 SO libspdk_ioat.so.7.0 00:01:55.597 LIB libspdk_vfio_user.a 00:01:55.597 SYMLINK libspdk_ioat.so 00:01:55.597 SO libspdk_vfio_user.so.5.0 00:01:55.597 LIB libspdk_util.a 00:01:55.597 SYMLINK libspdk_vfio_user.so 00:01:55.858 SO libspdk_util.so.9.0 00:01:55.858 SYMLINK libspdk_util.so 00:01:56.120 LIB libspdk_trace_parser.a 00:01:56.120 SO libspdk_trace_parser.so.5.0 00:01:56.120 SYMLINK libspdk_trace_parser.so 00:01:56.120 CC lib/conf/conf.o 00:01:56.381 CC lib/json/json_parse.o 00:01:56.381 CC lib/rdma/common.o 00:01:56.381 CC lib/json/json_util.o 00:01:56.381 CC lib/vmd/vmd.o 00:01:56.381 CC lib/rdma/rdma_verbs.o 00:01:56.381 CC lib/json/json_write.o 00:01:56.381 CC lib/vmd/led.o 00:01:56.381 CC lib/env_dpdk/env.o 00:01:56.381 CC lib/env_dpdk/memory.o 00:01:56.381 CC lib/env_dpdk/pci.o 00:01:56.381 CC lib/env_dpdk/init.o 00:01:56.381 CC lib/env_dpdk/threads.o 00:01:56.381 CC lib/idxd/idxd.o 00:01:56.381 CC lib/env_dpdk/pci_ioat.o 00:01:56.381 CC lib/env_dpdk/pci_idxd.o 00:01:56.381 CC lib/env_dpdk/pci_virtio.o 00:01:56.381 CC lib/idxd/idxd_user.o 00:01:56.381 CC lib/env_dpdk/pci_vmd.o 00:01:56.381 CC lib/env_dpdk/pci_event.o 00:01:56.381 CC lib/env_dpdk/sigbus_handler.o 00:01:56.381 CC lib/env_dpdk/pci_dpdk.o 00:01:56.381 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:56.381 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:56.381 LIB libspdk_conf.a 00:01:56.381 SO libspdk_conf.so.6.0 00:01:56.643 LIB libspdk_rdma.a 00:01:56.643 LIB libspdk_json.a 00:01:56.643 SYMLINK libspdk_conf.so 00:01:56.643 SO libspdk_rdma.so.6.0 00:01:56.643 SO libspdk_json.so.6.0 00:01:56.643 SYMLINK libspdk_rdma.so 00:01:56.643 SYMLINK libspdk_json.so 00:01:56.905 LIB libspdk_idxd.a 00:01:56.905 SO libspdk_idxd.so.12.0 00:01:56.905 LIB libspdk_vmd.a 00:01:56.905 SO libspdk_vmd.so.6.0 00:01:56.905 SYMLINK libspdk_idxd.so 00:01:56.905 SYMLINK libspdk_vmd.so 00:01:57.166 CC lib/jsonrpc/jsonrpc_server.o 00:01:57.166 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:57.166 CC lib/jsonrpc/jsonrpc_client.o 00:01:57.166 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:57.166 LIB libspdk_jsonrpc.a 00:01:57.427 SO libspdk_jsonrpc.so.6.0 00:01:57.427 SYMLINK libspdk_jsonrpc.so 00:01:57.427 LIB libspdk_env_dpdk.a 00:01:57.427 SO libspdk_env_dpdk.so.14.0 00:01:57.689 SYMLINK libspdk_env_dpdk.so 00:01:57.689 CC lib/rpc/rpc.o 00:01:57.953 LIB libspdk_rpc.a 00:01:57.953 SO libspdk_rpc.so.6.0 00:01:58.216 SYMLINK libspdk_rpc.so 00:01:58.506 CC lib/notify/notify.o 00:01:58.506 CC lib/trace/trace.o 00:01:58.506 CC lib/notify/notify_rpc.o 00:01:58.506 CC lib/trace/trace_flags.o 00:01:58.506 CC lib/keyring/keyring.o 00:01:58.506 CC lib/trace/trace_rpc.o 00:01:58.506 CC lib/keyring/keyring_rpc.o 00:01:58.772 LIB libspdk_notify.a 00:01:58.772 SO libspdk_notify.so.6.0 00:01:58.772 LIB libspdk_keyring.a 00:01:58.772 LIB libspdk_trace.a 00:01:58.772 SO libspdk_keyring.so.1.0 00:01:58.772 SYMLINK libspdk_notify.so 00:01:58.772 SO libspdk_trace.so.10.0 00:01:58.772 SYMLINK libspdk_keyring.so 00:01:58.772 SYMLINK libspdk_trace.so 00:01:59.343 CC lib/sock/sock.o 00:01:59.344 CC lib/sock/sock_rpc.o 00:01:59.344 CC lib/thread/thread.o 00:01:59.344 CC lib/thread/iobuf.o 00:01:59.604 LIB libspdk_sock.a 00:01:59.604 SO libspdk_sock.so.9.0 00:01:59.604 SYMLINK libspdk_sock.so 00:02:00.177 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:00.177 CC 
lib/nvme/nvme_ctrlr.o 00:02:00.177 CC lib/nvme/nvme_fabric.o 00:02:00.177 CC lib/nvme/nvme_ns_cmd.o 00:02:00.177 CC lib/nvme/nvme_ns.o 00:02:00.177 CC lib/nvme/nvme_pcie_common.o 00:02:00.177 CC lib/nvme/nvme_pcie.o 00:02:00.177 CC lib/nvme/nvme_qpair.o 00:02:00.177 CC lib/nvme/nvme.o 00:02:00.177 CC lib/nvme/nvme_quirks.o 00:02:00.177 CC lib/nvme/nvme_transport.o 00:02:00.177 CC lib/nvme/nvme_discovery.o 00:02:00.177 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:00.177 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:00.177 CC lib/nvme/nvme_tcp.o 00:02:00.177 CC lib/nvme/nvme_opal.o 00:02:00.177 CC lib/nvme/nvme_io_msg.o 00:02:00.177 CC lib/nvme/nvme_poll_group.o 00:02:00.177 CC lib/nvme/nvme_zns.o 00:02:00.177 CC lib/nvme/nvme_stubs.o 00:02:00.177 CC lib/nvme/nvme_auth.o 00:02:00.177 CC lib/nvme/nvme_cuse.o 00:02:00.177 CC lib/nvme/nvme_rdma.o 00:02:00.437 LIB libspdk_thread.a 00:02:00.437 SO libspdk_thread.so.10.0 00:02:00.698 SYMLINK libspdk_thread.so 00:02:00.958 CC lib/accel/accel.o 00:02:00.958 CC lib/accel/accel_rpc.o 00:02:00.958 CC lib/accel/accel_sw.o 00:02:00.958 CC lib/blob/blobstore.o 00:02:00.958 CC lib/virtio/virtio.o 00:02:00.958 CC lib/blob/request.o 00:02:00.958 CC lib/virtio/virtio_vhost_user.o 00:02:00.958 CC lib/blob/zeroes.o 00:02:00.958 CC lib/virtio/virtio_vfio_user.o 00:02:00.958 CC lib/blob/blob_bs_dev.o 00:02:00.958 CC lib/virtio/virtio_pci.o 00:02:00.958 CC lib/init/json_config.o 00:02:00.958 CC lib/init/subsystem.o 00:02:00.958 CC lib/init/subsystem_rpc.o 00:02:00.958 CC lib/init/rpc.o 00:02:01.219 LIB libspdk_init.a 00:02:01.219 SO libspdk_init.so.5.0 00:02:01.219 LIB libspdk_virtio.a 00:02:01.219 SO libspdk_virtio.so.7.0 00:02:01.219 SYMLINK libspdk_init.so 00:02:01.480 SYMLINK libspdk_virtio.so 00:02:01.480 CC lib/event/app.o 00:02:01.480 CC lib/event/reactor.o 00:02:01.480 CC lib/event/log_rpc.o 00:02:01.480 CC lib/event/app_rpc.o 00:02:01.480 CC lib/event/scheduler_static.o 00:02:01.741 LIB libspdk_accel.a 00:02:01.741 SO libspdk_accel.so.15.0 00:02:01.741 LIB libspdk_nvme.a 00:02:01.741 SYMLINK libspdk_accel.so 00:02:02.003 SO libspdk_nvme.so.13.0 00:02:02.003 LIB libspdk_event.a 00:02:02.003 SO libspdk_event.so.13.0 00:02:02.003 SYMLINK libspdk_event.so 00:02:02.265 SYMLINK libspdk_nvme.so 00:02:02.265 CC lib/bdev/bdev.o 00:02:02.265 CC lib/bdev/bdev_rpc.o 00:02:02.265 CC lib/bdev/bdev_zone.o 00:02:02.265 CC lib/bdev/part.o 00:02:02.265 CC lib/bdev/scsi_nvme.o 00:02:03.654 LIB libspdk_blob.a 00:02:03.654 SO libspdk_blob.so.11.0 00:02:03.654 SYMLINK libspdk_blob.so 00:02:03.915 CC lib/blobfs/blobfs.o 00:02:03.915 CC lib/lvol/lvol.o 00:02:03.915 CC lib/blobfs/tree.o 00:02:04.488 LIB libspdk_bdev.a 00:02:04.488 SO libspdk_bdev.so.15.0 00:02:04.488 SYMLINK libspdk_bdev.so 00:02:04.488 LIB libspdk_blobfs.a 00:02:04.750 SO libspdk_blobfs.so.10.0 00:02:04.750 LIB libspdk_lvol.a 00:02:04.750 SO libspdk_lvol.so.10.0 00:02:04.750 SYMLINK libspdk_blobfs.so 00:02:04.750 SYMLINK libspdk_lvol.so 00:02:05.009 CC lib/scsi/dev.o 00:02:05.009 CC lib/scsi/lun.o 00:02:05.009 CC lib/scsi/port.o 00:02:05.009 CC lib/scsi/scsi.o 00:02:05.009 CC lib/scsi/scsi_bdev.o 00:02:05.009 CC lib/scsi/scsi_pr.o 00:02:05.009 CC lib/scsi/scsi_rpc.o 00:02:05.009 CC lib/scsi/task.o 00:02:05.009 CC lib/nvmf/ctrlr.o 00:02:05.009 CC lib/nvmf/ctrlr_discovery.o 00:02:05.009 CC lib/nvmf/ctrlr_bdev.o 00:02:05.009 CC lib/nvmf/subsystem.o 00:02:05.009 CC lib/nvmf/nvmf.o 00:02:05.009 CC lib/nvmf/tcp.o 00:02:05.009 CC lib/nvmf/nvmf_rpc.o 00:02:05.009 CC lib/nvmf/transport.o 00:02:05.009 CC lib/nvmf/stubs.o 00:02:05.009 
CC lib/nvmf/rdma.o 00:02:05.009 CC lib/nvmf/auth.o 00:02:05.009 CC lib/nbd/nbd.o 00:02:05.009 CC lib/ublk/ublk.o 00:02:05.009 CC lib/ftl/ftl_init.o 00:02:05.009 CC lib/nbd/nbd_rpc.o 00:02:05.009 CC lib/ublk/ublk_rpc.o 00:02:05.009 CC lib/ftl/ftl_core.o 00:02:05.009 CC lib/ftl/ftl_layout.o 00:02:05.009 CC lib/ftl/ftl_debug.o 00:02:05.009 CC lib/ftl/ftl_io.o 00:02:05.009 CC lib/ftl/ftl_sb.o 00:02:05.009 CC lib/ftl/ftl_l2p.o 00:02:05.009 CC lib/ftl/ftl_l2p_flat.o 00:02:05.009 CC lib/ftl/ftl_nv_cache.o 00:02:05.009 CC lib/ftl/ftl_band.o 00:02:05.009 CC lib/ftl/ftl_band_ops.o 00:02:05.009 CC lib/ftl/ftl_writer.o 00:02:05.009 CC lib/ftl/ftl_rq.o 00:02:05.009 CC lib/ftl/ftl_reloc.o 00:02:05.009 CC lib/ftl/ftl_l2p_cache.o 00:02:05.009 CC lib/ftl/ftl_p2l.o 00:02:05.009 CC lib/ftl/mngt/ftl_mngt.o 00:02:05.009 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:05.009 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:05.009 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:05.009 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:05.009 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:05.009 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:05.009 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:05.009 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:05.009 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:05.009 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:05.009 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:05.009 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:05.009 CC lib/ftl/utils/ftl_mempool.o 00:02:05.009 CC lib/ftl/utils/ftl_conf.o 00:02:05.009 CC lib/ftl/utils/ftl_md.o 00:02:05.009 CC lib/ftl/utils/ftl_bitmap.o 00:02:05.009 CC lib/ftl/utils/ftl_property.o 00:02:05.009 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:05.009 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:05.009 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:05.009 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:05.009 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:05.009 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:05.009 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:05.009 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:05.009 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:05.009 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:05.009 CC lib/ftl/base/ftl_base_dev.o 00:02:05.009 CC lib/ftl/base/ftl_base_bdev.o 00:02:05.009 CC lib/ftl/ftl_trace.o 00:02:05.268 LIB libspdk_nbd.a 00:02:05.529 LIB libspdk_scsi.a 00:02:05.529 SO libspdk_nbd.so.7.0 00:02:05.529 SO libspdk_scsi.so.9.0 00:02:05.529 SYMLINK libspdk_nbd.so 00:02:05.529 SYMLINK libspdk_scsi.so 00:02:05.529 LIB libspdk_ublk.a 00:02:05.529 SO libspdk_ublk.so.3.0 00:02:05.790 SYMLINK libspdk_ublk.so 00:02:05.790 LIB libspdk_ftl.a 00:02:05.790 CC lib/vhost/vhost.o 00:02:05.790 CC lib/vhost/vhost_rpc.o 00:02:05.790 CC lib/vhost/vhost_scsi.o 00:02:05.790 CC lib/vhost/vhost_blk.o 00:02:05.790 CC lib/vhost/rte_vhost_user.o 00:02:06.051 CC lib/iscsi/conn.o 00:02:06.051 CC lib/iscsi/iscsi.o 00:02:06.051 CC lib/iscsi/init_grp.o 00:02:06.051 CC lib/iscsi/md5.o 00:02:06.051 CC lib/iscsi/param.o 00:02:06.051 CC lib/iscsi/portal_grp.o 00:02:06.051 CC lib/iscsi/iscsi_rpc.o 00:02:06.051 CC lib/iscsi/tgt_node.o 00:02:06.051 CC lib/iscsi/iscsi_subsystem.o 00:02:06.051 CC lib/iscsi/task.o 00:02:06.051 SO libspdk_ftl.so.9.0 00:02:06.312 SYMLINK libspdk_ftl.so 00:02:06.573 LIB libspdk_nvmf.a 00:02:06.834 SO libspdk_nvmf.so.18.0 00:02:06.834 LIB libspdk_vhost.a 00:02:06.834 SO libspdk_vhost.so.8.0 00:02:06.834 SYMLINK libspdk_nvmf.so 00:02:07.095 SYMLINK libspdk_vhost.so 00:02:07.095 LIB libspdk_iscsi.a 00:02:07.095 SO libspdk_iscsi.so.8.0 00:02:07.356 SYMLINK libspdk_iscsi.so 00:02:07.928 CC module/env_dpdk/env_dpdk_rpc.o 00:02:07.928 LIB libspdk_env_dpdk_rpc.a 
00:02:07.928 CC module/accel/iaa/accel_iaa.o 00:02:07.928 CC module/accel/iaa/accel_iaa_rpc.o 00:02:07.928 CC module/sock/posix/posix.o 00:02:07.928 CC module/blob/bdev/blob_bdev.o 00:02:07.928 CC module/accel/ioat/accel_ioat.o 00:02:07.928 CC module/accel/ioat/accel_ioat_rpc.o 00:02:07.928 CC module/scheduler/gscheduler/gscheduler.o 00:02:07.928 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:07.928 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:07.928 CC module/keyring/file/keyring.o 00:02:07.928 CC module/accel/error/accel_error.o 00:02:07.928 CC module/accel/dsa/accel_dsa.o 00:02:07.928 CC module/keyring/file/keyring_rpc.o 00:02:07.928 CC module/accel/error/accel_error_rpc.o 00:02:07.928 CC module/accel/dsa/accel_dsa_rpc.o 00:02:07.928 SO libspdk_env_dpdk_rpc.so.6.0 00:02:08.188 SYMLINK libspdk_env_dpdk_rpc.so 00:02:08.188 LIB libspdk_scheduler_gscheduler.a 00:02:08.188 LIB libspdk_keyring_file.a 00:02:08.188 LIB libspdk_scheduler_dpdk_governor.a 00:02:08.188 LIB libspdk_accel_iaa.a 00:02:08.188 LIB libspdk_accel_error.a 00:02:08.188 LIB libspdk_accel_ioat.a 00:02:08.188 LIB libspdk_scheduler_dynamic.a 00:02:08.188 SO libspdk_scheduler_gscheduler.so.4.0 00:02:08.188 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:08.188 SO libspdk_keyring_file.so.1.0 00:02:08.188 SO libspdk_accel_iaa.so.3.0 00:02:08.188 LIB libspdk_accel_dsa.a 00:02:08.188 SO libspdk_accel_error.so.2.0 00:02:08.188 SO libspdk_scheduler_dynamic.so.4.0 00:02:08.188 SO libspdk_accel_ioat.so.6.0 00:02:08.188 LIB libspdk_blob_bdev.a 00:02:08.188 SYMLINK libspdk_scheduler_gscheduler.so 00:02:08.188 SO libspdk_accel_dsa.so.5.0 00:02:08.188 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:08.188 SYMLINK libspdk_keyring_file.so 00:02:08.188 SO libspdk_blob_bdev.so.11.0 00:02:08.188 SYMLINK libspdk_accel_iaa.so 00:02:08.188 SYMLINK libspdk_accel_ioat.so 00:02:08.188 SYMLINK libspdk_scheduler_dynamic.so 00:02:08.448 SYMLINK libspdk_accel_error.so 00:02:08.448 SYMLINK libspdk_accel_dsa.so 00:02:08.448 SYMLINK libspdk_blob_bdev.so 00:02:08.709 LIB libspdk_sock_posix.a 00:02:08.709 SO libspdk_sock_posix.so.6.0 00:02:08.709 SYMLINK libspdk_sock_posix.so 00:02:08.970 CC module/blobfs/bdev/blobfs_bdev.o 00:02:08.971 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:08.971 CC module/bdev/gpt/gpt.o 00:02:08.971 CC module/bdev/gpt/vbdev_gpt.o 00:02:08.971 CC module/bdev/malloc/bdev_malloc.o 00:02:08.971 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:08.971 CC module/bdev/delay/vbdev_delay.o 00:02:08.971 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:08.971 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:08.971 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:08.971 CC module/bdev/ftl/bdev_ftl.o 00:02:08.971 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:08.971 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:08.971 CC module/bdev/error/vbdev_error.o 00:02:08.971 CC module/bdev/nvme/bdev_nvme.o 00:02:08.971 CC module/bdev/error/vbdev_error_rpc.o 00:02:08.971 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:08.971 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:08.971 CC module/bdev/nvme/nvme_rpc.o 00:02:08.971 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:08.971 CC module/bdev/nvme/bdev_mdns_client.o 00:02:08.971 CC module/bdev/passthru/vbdev_passthru.o 00:02:08.971 CC module/bdev/lvol/vbdev_lvol.o 00:02:08.971 CC module/bdev/nvme/vbdev_opal.o 00:02:08.971 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:08.971 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:08.971 CC module/bdev/split/vbdev_split.o 00:02:08.971 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:02:08.971 CC module/bdev/null/bdev_null.o 00:02:08.971 CC module/bdev/aio/bdev_aio.o 00:02:08.971 CC module/bdev/split/vbdev_split_rpc.o 00:02:08.971 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:08.971 CC module/bdev/aio/bdev_aio_rpc.o 00:02:08.971 CC module/bdev/null/bdev_null_rpc.o 00:02:08.971 CC module/bdev/raid/bdev_raid.o 00:02:08.971 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:08.971 CC module/bdev/raid/bdev_raid_rpc.o 00:02:08.971 CC module/bdev/iscsi/bdev_iscsi.o 00:02:08.971 CC module/bdev/raid/bdev_raid_sb.o 00:02:08.971 CC module/bdev/raid/raid0.o 00:02:08.971 CC module/bdev/raid/raid1.o 00:02:08.971 CC module/bdev/raid/concat.o 00:02:09.231 LIB libspdk_blobfs_bdev.a 00:02:09.231 SO libspdk_blobfs_bdev.so.6.0 00:02:09.231 LIB libspdk_bdev_passthru.a 00:02:09.231 LIB libspdk_bdev_split.a 00:02:09.231 LIB libspdk_bdev_gpt.a 00:02:09.231 SYMLINK libspdk_blobfs_bdev.so 00:02:09.231 SO libspdk_bdev_passthru.so.6.0 00:02:09.231 LIB libspdk_bdev_null.a 00:02:09.231 SO libspdk_bdev_split.so.6.0 00:02:09.231 LIB libspdk_bdev_error.a 00:02:09.231 LIB libspdk_bdev_ftl.a 00:02:09.231 SO libspdk_bdev_gpt.so.6.0 00:02:09.231 SO libspdk_bdev_null.so.6.0 00:02:09.231 LIB libspdk_bdev_malloc.a 00:02:09.231 SO libspdk_bdev_error.so.6.0 00:02:09.231 LIB libspdk_bdev_delay.a 00:02:09.231 SO libspdk_bdev_ftl.so.6.0 00:02:09.231 LIB libspdk_bdev_zone_block.a 00:02:09.231 SYMLINK libspdk_bdev_passthru.so 00:02:09.231 LIB libspdk_bdev_aio.a 00:02:09.231 SYMLINK libspdk_bdev_split.so 00:02:09.231 SYMLINK libspdk_bdev_gpt.so 00:02:09.231 SO libspdk_bdev_malloc.so.6.0 00:02:09.231 SYMLINK libspdk_bdev_null.so 00:02:09.231 LIB libspdk_bdev_iscsi.a 00:02:09.231 SO libspdk_bdev_delay.so.6.0 00:02:09.231 SO libspdk_bdev_zone_block.so.6.0 00:02:09.231 SO libspdk_bdev_aio.so.6.0 00:02:09.231 SYMLINK libspdk_bdev_error.so 00:02:09.231 SYMLINK libspdk_bdev_ftl.so 00:02:09.231 SO libspdk_bdev_iscsi.so.6.0 00:02:09.492 SYMLINK libspdk_bdev_zone_block.so 00:02:09.492 SYMLINK libspdk_bdev_malloc.so 00:02:09.492 SYMLINK libspdk_bdev_aio.so 00:02:09.492 SYMLINK libspdk_bdev_delay.so 00:02:09.492 LIB libspdk_bdev_virtio.a 00:02:09.492 LIB libspdk_bdev_lvol.a 00:02:09.492 SYMLINK libspdk_bdev_iscsi.so 00:02:09.492 SO libspdk_bdev_lvol.so.6.0 00:02:09.492 SO libspdk_bdev_virtio.so.6.0 00:02:09.492 SYMLINK libspdk_bdev_lvol.so 00:02:09.492 SYMLINK libspdk_bdev_virtio.so 00:02:09.752 LIB libspdk_bdev_raid.a 00:02:09.752 SO libspdk_bdev_raid.so.6.0 00:02:09.752 SYMLINK libspdk_bdev_raid.so 00:02:10.693 LIB libspdk_bdev_nvme.a 00:02:10.693 SO libspdk_bdev_nvme.so.7.0 00:02:10.954 SYMLINK libspdk_bdev_nvme.so 00:02:11.526 CC module/event/subsystems/iobuf/iobuf.o 00:02:11.526 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:11.526 CC module/event/subsystems/keyring/keyring.o 00:02:11.526 CC module/event/subsystems/sock/sock.o 00:02:11.526 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:11.526 CC module/event/subsystems/scheduler/scheduler.o 00:02:11.526 CC module/event/subsystems/vmd/vmd.o 00:02:11.526 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:11.787 LIB libspdk_event_keyring.a 00:02:11.787 LIB libspdk_event_sock.a 00:02:11.787 LIB libspdk_event_iobuf.a 00:02:11.787 LIB libspdk_event_vhost_blk.a 00:02:11.787 SO libspdk_event_keyring.so.1.0 00:02:11.787 LIB libspdk_event_vmd.a 00:02:11.787 LIB libspdk_event_scheduler.a 00:02:11.787 SO libspdk_event_sock.so.5.0 00:02:11.787 SO libspdk_event_iobuf.so.3.0 00:02:11.787 SO libspdk_event_vhost_blk.so.3.0 00:02:11.787 SO 
libspdk_event_vmd.so.6.0 00:02:11.787 SO libspdk_event_scheduler.so.4.0 00:02:11.787 SYMLINK libspdk_event_keyring.so 00:02:11.787 SYMLINK libspdk_event_sock.so 00:02:11.787 SYMLINK libspdk_event_vhost_blk.so 00:02:11.787 SYMLINK libspdk_event_scheduler.so 00:02:11.787 SYMLINK libspdk_event_iobuf.so 00:02:11.787 SYMLINK libspdk_event_vmd.so 00:02:12.047 CC module/event/subsystems/accel/accel.o 00:02:12.308 LIB libspdk_event_accel.a 00:02:12.308 SO libspdk_event_accel.so.6.0 00:02:12.308 SYMLINK libspdk_event_accel.so 00:02:12.880 CC module/event/subsystems/bdev/bdev.o 00:02:12.880 LIB libspdk_event_bdev.a 00:02:12.880 SO libspdk_event_bdev.so.6.0 00:02:13.141 SYMLINK libspdk_event_bdev.so 00:02:13.403 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:13.403 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:13.403 CC module/event/subsystems/ublk/ublk.o 00:02:13.403 CC module/event/subsystems/nbd/nbd.o 00:02:13.403 CC module/event/subsystems/scsi/scsi.o 00:02:13.665 LIB libspdk_event_ublk.a 00:02:13.665 LIB libspdk_event_nbd.a 00:02:13.665 LIB libspdk_event_scsi.a 00:02:13.665 SO libspdk_event_ublk.so.3.0 00:02:13.665 SO libspdk_event_nbd.so.6.0 00:02:13.665 SO libspdk_event_scsi.so.6.0 00:02:13.665 LIB libspdk_event_nvmf.a 00:02:13.665 SYMLINK libspdk_event_ublk.so 00:02:13.665 SYMLINK libspdk_event_nbd.so 00:02:13.665 SO libspdk_event_nvmf.so.6.0 00:02:13.665 SYMLINK libspdk_event_scsi.so 00:02:13.665 SYMLINK libspdk_event_nvmf.so 00:02:13.925 CC module/event/subsystems/iscsi/iscsi.o 00:02:13.925 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:14.186 LIB libspdk_event_iscsi.a 00:02:14.186 LIB libspdk_event_vhost_scsi.a 00:02:14.186 SO libspdk_event_iscsi.so.6.0 00:02:14.186 SO libspdk_event_vhost_scsi.so.3.0 00:02:14.186 SYMLINK libspdk_event_vhost_scsi.so 00:02:14.186 SYMLINK libspdk_event_iscsi.so 00:02:14.447 SO libspdk.so.6.0 00:02:14.447 SYMLINK libspdk.so 00:02:14.706 TEST_HEADER include/spdk/accel.h 00:02:14.706 TEST_HEADER include/spdk/accel_module.h 00:02:14.986 TEST_HEADER include/spdk/assert.h 00:02:14.986 TEST_HEADER include/spdk/base64.h 00:02:14.986 TEST_HEADER include/spdk/bdev.h 00:02:14.986 TEST_HEADER include/spdk/barrier.h 00:02:14.986 TEST_HEADER include/spdk/bdev_module.h 00:02:14.986 CC app/trace_record/trace_record.o 00:02:14.986 TEST_HEADER include/spdk/bit_array.h 00:02:14.986 TEST_HEADER include/spdk/bdev_zone.h 00:02:14.986 TEST_HEADER include/spdk/bit_pool.h 00:02:14.986 TEST_HEADER include/spdk/blob_bdev.h 00:02:14.986 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:14.986 TEST_HEADER include/spdk/blobfs.h 00:02:14.986 CC app/spdk_nvme_identify/identify.o 00:02:14.986 TEST_HEADER include/spdk/blob.h 00:02:14.986 TEST_HEADER include/spdk/conf.h 00:02:14.986 TEST_HEADER include/spdk/cpuset.h 00:02:14.986 TEST_HEADER include/spdk/config.h 00:02:14.986 TEST_HEADER include/spdk/crc16.h 00:02:14.986 CC app/spdk_nvme_discover/discovery_aer.o 00:02:14.986 TEST_HEADER include/spdk/crc64.h 00:02:14.986 TEST_HEADER include/spdk/crc32.h 00:02:14.986 TEST_HEADER include/spdk/dif.h 00:02:14.986 CC app/spdk_nvme_perf/perf.o 00:02:14.986 TEST_HEADER include/spdk/dma.h 00:02:14.986 TEST_HEADER include/spdk/endian.h 00:02:14.986 CXX app/trace/trace.o 00:02:14.986 TEST_HEADER include/spdk/env_dpdk.h 00:02:14.986 TEST_HEADER include/spdk/event.h 00:02:14.986 TEST_HEADER include/spdk/env.h 00:02:14.986 CC app/spdk_lspci/spdk_lspci.o 00:02:14.986 CC app/spdk_top/spdk_top.o 00:02:14.986 TEST_HEADER include/spdk/fd_group.h 00:02:14.986 TEST_HEADER include/spdk/fd.h 00:02:14.986 
TEST_HEADER include/spdk/file.h 00:02:14.986 CC test/rpc_client/rpc_client_test.o 00:02:14.986 TEST_HEADER include/spdk/ftl.h 00:02:14.986 TEST_HEADER include/spdk/hexlify.h 00:02:14.986 TEST_HEADER include/spdk/gpt_spec.h 00:02:14.986 TEST_HEADER include/spdk/histogram_data.h 00:02:14.986 TEST_HEADER include/spdk/idxd.h 00:02:14.986 TEST_HEADER include/spdk/idxd_spec.h 00:02:14.986 TEST_HEADER include/spdk/ioat.h 00:02:14.986 TEST_HEADER include/spdk/ioat_spec.h 00:02:14.986 TEST_HEADER include/spdk/iscsi_spec.h 00:02:14.986 TEST_HEADER include/spdk/init.h 00:02:14.986 TEST_HEADER include/spdk/json.h 00:02:14.986 TEST_HEADER include/spdk/jsonrpc.h 00:02:14.986 TEST_HEADER include/spdk/keyring.h 00:02:14.986 TEST_HEADER include/spdk/likely.h 00:02:14.986 TEST_HEADER include/spdk/keyring_module.h 00:02:14.986 TEST_HEADER include/spdk/lvol.h 00:02:14.986 TEST_HEADER include/spdk/memory.h 00:02:14.986 TEST_HEADER include/spdk/log.h 00:02:14.986 TEST_HEADER include/spdk/nbd.h 00:02:14.986 TEST_HEADER include/spdk/mmio.h 00:02:14.986 CC app/spdk_dd/spdk_dd.o 00:02:14.986 TEST_HEADER include/spdk/notify.h 00:02:14.986 TEST_HEADER include/spdk/nvme.h 00:02:14.986 TEST_HEADER include/spdk/nvme_intel.h 00:02:14.986 CC app/vhost/vhost.o 00:02:14.986 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:14.986 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:14.986 CC app/iscsi_tgt/iscsi_tgt.o 00:02:14.986 TEST_HEADER include/spdk/nvme_zns.h 00:02:14.986 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:14.986 TEST_HEADER include/spdk/nvme_spec.h 00:02:14.986 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:14.986 TEST_HEADER include/spdk/nvmf.h 00:02:14.986 TEST_HEADER include/spdk/nvmf_transport.h 00:02:14.986 CC app/nvmf_tgt/nvmf_main.o 00:02:14.986 TEST_HEADER include/spdk/nvmf_spec.h 00:02:14.986 TEST_HEADER include/spdk/opal.h 00:02:14.986 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:14.986 TEST_HEADER include/spdk/pci_ids.h 00:02:14.986 TEST_HEADER include/spdk/opal_spec.h 00:02:14.986 TEST_HEADER include/spdk/pipe.h 00:02:14.986 TEST_HEADER include/spdk/queue.h 00:02:14.986 TEST_HEADER include/spdk/rpc.h 00:02:14.986 TEST_HEADER include/spdk/scheduler.h 00:02:14.986 TEST_HEADER include/spdk/reduce.h 00:02:14.986 CC app/spdk_tgt/spdk_tgt.o 00:02:14.986 TEST_HEADER include/spdk/scsi_spec.h 00:02:14.986 TEST_HEADER include/spdk/scsi.h 00:02:14.986 TEST_HEADER include/spdk/sock.h 00:02:14.986 TEST_HEADER include/spdk/stdinc.h 00:02:14.986 TEST_HEADER include/spdk/string.h 00:02:14.986 TEST_HEADER include/spdk/thread.h 00:02:14.986 TEST_HEADER include/spdk/trace.h 00:02:14.986 TEST_HEADER include/spdk/trace_parser.h 00:02:14.986 TEST_HEADER include/spdk/tree.h 00:02:14.986 TEST_HEADER include/spdk/ublk.h 00:02:14.986 TEST_HEADER include/spdk/util.h 00:02:14.986 TEST_HEADER include/spdk/uuid.h 00:02:14.986 TEST_HEADER include/spdk/version.h 00:02:14.986 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:14.986 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:14.986 TEST_HEADER include/spdk/vhost.h 00:02:14.986 TEST_HEADER include/spdk/vmd.h 00:02:14.986 TEST_HEADER include/spdk/xor.h 00:02:14.986 TEST_HEADER include/spdk/zipf.h 00:02:14.986 CXX test/cpp_headers/accel.o 00:02:14.986 CXX test/cpp_headers/assert.o 00:02:14.986 CXX test/cpp_headers/accel_module.o 00:02:14.986 CXX test/cpp_headers/barrier.o 00:02:14.986 CXX test/cpp_headers/base64.o 00:02:14.986 CXX test/cpp_headers/bdev_module.o 00:02:14.986 CXX test/cpp_headers/bdev.o 00:02:14.986 CXX test/cpp_headers/bdev_zone.o 00:02:14.986 CXX 
test/cpp_headers/bit_array.o 00:02:14.987 CXX test/cpp_headers/bit_pool.o 00:02:14.987 CXX test/cpp_headers/blobfs_bdev.o 00:02:14.987 CXX test/cpp_headers/blob_bdev.o 00:02:14.987 CXX test/cpp_headers/blobfs.o 00:02:14.987 CXX test/cpp_headers/blob.o 00:02:14.987 CXX test/cpp_headers/crc16.o 00:02:14.987 CXX test/cpp_headers/cpuset.o 00:02:14.987 CXX test/cpp_headers/config.o 00:02:14.987 CXX test/cpp_headers/conf.o 00:02:14.987 CXX test/cpp_headers/crc32.o 00:02:14.987 CXX test/cpp_headers/dif.o 00:02:14.987 CXX test/cpp_headers/crc64.o 00:02:14.987 CXX test/cpp_headers/endian.o 00:02:14.987 CXX test/cpp_headers/env_dpdk.o 00:02:14.987 CXX test/cpp_headers/dma.o 00:02:14.987 CXX test/cpp_headers/event.o 00:02:14.987 CXX test/cpp_headers/env.o 00:02:14.987 CXX test/cpp_headers/fd_group.o 00:02:14.987 CXX test/cpp_headers/fd.o 00:02:14.987 CXX test/cpp_headers/file.o 00:02:14.987 CXX test/cpp_headers/ftl.o 00:02:14.987 CXX test/cpp_headers/gpt_spec.o 00:02:14.987 CXX test/cpp_headers/hexlify.o 00:02:14.987 CXX test/cpp_headers/histogram_data.o 00:02:14.987 CXX test/cpp_headers/idxd_spec.o 00:02:14.987 CXX test/cpp_headers/idxd.o 00:02:14.987 CXX test/cpp_headers/init.o 00:02:14.987 CXX test/cpp_headers/ioat.o 00:02:14.987 CXX test/cpp_headers/ioat_spec.o 00:02:14.987 CXX test/cpp_headers/json.o 00:02:14.987 CXX test/cpp_headers/iscsi_spec.o 00:02:14.987 CXX test/cpp_headers/jsonrpc.o 00:02:14.987 CXX test/cpp_headers/keyring_module.o 00:02:14.987 CXX test/cpp_headers/keyring.o 00:02:14.987 CXX test/cpp_headers/log.o 00:02:14.987 CXX test/cpp_headers/likely.o 00:02:14.987 CXX test/cpp_headers/lvol.o 00:02:14.987 CXX test/cpp_headers/mmio.o 00:02:14.987 CXX test/cpp_headers/memory.o 00:02:14.987 CXX test/cpp_headers/nbd.o 00:02:14.987 CXX test/cpp_headers/notify.o 00:02:14.987 CXX test/cpp_headers/nvme.o 00:02:14.987 CXX test/cpp_headers/nvme_intel.o 00:02:14.987 CXX test/cpp_headers/nvme_ocssd.o 00:02:14.987 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:14.987 CXX test/cpp_headers/nvme_spec.o 00:02:14.987 CXX test/cpp_headers/nvme_zns.o 00:02:14.987 CXX test/cpp_headers/nvmf_cmd.o 00:02:14.987 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:14.987 CXX test/cpp_headers/nvmf.o 00:02:14.987 CXX test/cpp_headers/nvmf_spec.o 00:02:14.987 CXX test/cpp_headers/nvmf_transport.o 00:02:14.987 CXX test/cpp_headers/opal.o 00:02:14.987 CXX test/cpp_headers/opal_spec.o 00:02:14.987 CXX test/cpp_headers/pci_ids.o 00:02:14.987 CXX test/cpp_headers/pipe.o 00:02:14.987 CXX test/cpp_headers/queue.o 00:02:14.987 CXX test/cpp_headers/reduce.o 00:02:14.987 CXX test/cpp_headers/rpc.o 00:02:14.987 CXX test/cpp_headers/scheduler.o 00:02:14.987 CXX test/cpp_headers/scsi.o 00:02:14.987 CC test/app/histogram_perf/histogram_perf.o 00:02:14.987 CC examples/accel/perf/accel_perf.o 00:02:15.260 CC test/event/event_perf/event_perf.o 00:02:15.261 CC test/app/jsoncat/jsoncat.o 00:02:15.261 CC test/event/reactor_perf/reactor_perf.o 00:02:15.261 CXX test/cpp_headers/scsi_spec.o 00:02:15.261 CC test/nvme/aer/aer.o 00:02:15.261 CC examples/idxd/perf/perf.o 00:02:15.261 CC test/nvme/err_injection/err_injection.o 00:02:15.261 CC test/app/stub/stub.o 00:02:15.261 CC examples/util/zipf/zipf.o 00:02:15.261 CC test/nvme/e2edp/nvme_dp.o 00:02:15.261 CC examples/vmd/lsvmd/lsvmd.o 00:02:15.261 CC test/nvme/overhead/overhead.o 00:02:15.261 CC test/event/reactor/reactor.o 00:02:15.261 CC test/nvme/fused_ordering/fused_ordering.o 00:02:15.261 CC test/nvme/reserve/reserve.o 00:02:15.261 CC test/nvme/reset/reset.o 00:02:15.261 CC 
test/nvme/simple_copy/simple_copy.o 00:02:15.261 CC test/nvme/startup/startup.o 00:02:15.261 CC examples/vmd/led/led.o 00:02:15.261 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:15.261 CC test/env/pci/pci_ut.o 00:02:15.261 CC test/nvme/cuse/cuse.o 00:02:15.261 CC app/fio/nvme/fio_plugin.o 00:02:15.261 CC test/nvme/fdp/fdp.o 00:02:15.261 CC test/nvme/connect_stress/connect_stress.o 00:02:15.261 CC test/env/vtophys/vtophys.o 00:02:15.261 CC examples/ioat/verify/verify.o 00:02:15.261 CC test/event/app_repeat/app_repeat.o 00:02:15.261 CC test/env/memory/memory_ut.o 00:02:15.261 CC test/nvme/boot_partition/boot_partition.o 00:02:15.261 CC test/thread/poller_perf/poller_perf.o 00:02:15.261 CC test/nvme/sgl/sgl.o 00:02:15.261 CC examples/nvme/reconnect/reconnect.o 00:02:15.261 CC examples/nvme/hotplug/hotplug.o 00:02:15.261 CC examples/nvme/arbitration/arbitration.o 00:02:15.261 CC examples/sock/hello_world/hello_sock.o 00:02:15.261 CC examples/bdev/hello_world/hello_bdev.o 00:02:15.261 CC examples/nvme/hello_world/hello_world.o 00:02:15.261 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:15.261 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:15.261 CC examples/nvme/abort/abort.o 00:02:15.261 CC examples/bdev/bdevperf/bdevperf.o 00:02:15.261 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:15.261 CC examples/ioat/perf/perf.o 00:02:15.261 CC test/dma/test_dma/test_dma.o 00:02:15.261 CC app/fio/bdev/fio_plugin.o 00:02:15.261 CC examples/blob/cli/blobcli.o 00:02:15.261 CC examples/nvmf/nvmf/nvmf.o 00:02:15.261 CC test/nvme/compliance/nvme_compliance.o 00:02:15.261 CC test/accel/dif/dif.o 00:02:15.261 CC test/app/bdev_svc/bdev_svc.o 00:02:15.261 CC examples/blob/hello_world/hello_blob.o 00:02:15.261 CC test/blobfs/mkfs/mkfs.o 00:02:15.261 CC examples/thread/thread/thread_ex.o 00:02:15.261 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:15.261 CC test/bdev/bdevio/bdevio.o 00:02:15.261 CC test/event/scheduler/scheduler.o 00:02:15.261 LINK spdk_lspci 00:02:15.528 LINK rpc_client_test 00:02:15.528 LINK nvmf_tgt 00:02:15.528 LINK spdk_nvme_discover 00:02:15.528 LINK vhost 00:02:15.528 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:15.528 CC test/lvol/esnap/esnap.o 00:02:15.528 LINK interrupt_tgt 00:02:15.528 LINK spdk_tgt 00:02:15.528 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:15.528 CC test/env/mem_callbacks/mem_callbacks.o 00:02:15.786 LINK iscsi_tgt 00:02:15.786 LINK histogram_perf 00:02:15.786 LINK lsvmd 00:02:15.786 LINK spdk_trace_record 00:02:15.786 LINK vtophys 00:02:15.786 LINK jsoncat 00:02:15.786 LINK reactor_perf 00:02:15.786 LINK led 00:02:15.786 LINK event_perf 00:02:15.786 LINK reactor 00:02:15.786 LINK zipf 00:02:15.786 LINK startup 00:02:15.786 CXX test/cpp_headers/sock.o 00:02:15.786 LINK poller_perf 00:02:15.786 CXX test/cpp_headers/stdinc.o 00:02:15.786 LINK stub 00:02:15.786 CXX test/cpp_headers/string.o 00:02:15.786 CXX test/cpp_headers/thread.o 00:02:15.786 CXX test/cpp_headers/trace.o 00:02:15.786 CXX test/cpp_headers/trace_parser.o 00:02:15.786 LINK app_repeat 00:02:15.786 LINK doorbell_aers 00:02:15.786 CXX test/cpp_headers/tree.o 00:02:15.786 LINK env_dpdk_post_init 00:02:15.786 LINK err_injection 00:02:15.786 CXX test/cpp_headers/ublk.o 00:02:15.786 CXX test/cpp_headers/util.o 00:02:15.786 CXX test/cpp_headers/uuid.o 00:02:15.786 CXX test/cpp_headers/version.o 00:02:15.786 CXX test/cpp_headers/vfio_user_pci.o 00:02:15.786 CXX test/cpp_headers/vfio_user_spec.o 00:02:15.786 LINK bdev_svc 00:02:15.786 CXX test/cpp_headers/vhost.o 00:02:15.786 CXX 
test/cpp_headers/vmd.o 00:02:15.786 CXX test/cpp_headers/xor.o 00:02:15.786 LINK connect_stress 00:02:15.786 CXX test/cpp_headers/zipf.o 00:02:15.786 LINK fused_ordering 00:02:15.786 LINK boot_partition 00:02:15.786 LINK pmr_persistence 00:02:15.786 LINK hello_world 00:02:15.786 LINK spdk_dd 00:02:15.786 LINK reserve 00:02:15.786 LINK mkfs 00:02:15.786 LINK cmb_copy 00:02:15.786 LINK verify 00:02:15.786 LINK aer 00:02:15.786 LINK simple_copy 00:02:15.786 LINK hotplug 00:02:16.046 LINK ioat_perf 00:02:16.046 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:16.046 LINK scheduler 00:02:16.046 LINK hello_sock 00:02:16.046 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:16.046 LINK sgl 00:02:16.046 LINK nvme_dp 00:02:16.046 LINK reset 00:02:16.046 LINK hello_blob 00:02:16.046 LINK overhead 00:02:16.046 LINK hello_bdev 00:02:16.046 LINK nvmf 00:02:16.046 LINK thread 00:02:16.046 LINK nvme_compliance 00:02:16.046 LINK arbitration 00:02:16.046 LINK idxd_perf 00:02:16.046 LINK reconnect 00:02:16.046 LINK abort 00:02:16.046 LINK fdp 00:02:16.046 LINK accel_perf 00:02:16.046 LINK pci_ut 00:02:16.046 LINK dif 00:02:16.046 LINK spdk_trace 00:02:16.046 LINK test_dma 00:02:16.046 LINK bdevio 00:02:16.307 LINK blobcli 00:02:16.307 LINK nvme_fuzz 00:02:16.307 LINK spdk_nvme 00:02:16.307 LINK nvme_manage 00:02:16.307 LINK spdk_bdev 00:02:16.307 LINK spdk_nvme_identify 00:02:16.307 LINK vhost_fuzz 00:02:16.307 LINK mem_callbacks 00:02:16.569 LINK spdk_nvme_perf 00:02:16.569 LINK bdevperf 00:02:16.569 LINK spdk_top 00:02:16.569 LINK memory_ut 00:02:16.830 LINK cuse 00:02:17.090 LINK iscsi_fuzz 00:02:19.000 LINK esnap 00:02:19.573 00:02:19.573 real 0m48.728s 00:02:19.573 user 6m23.218s 00:02:19.573 sys 4m20.607s 00:02:19.573 20:15:35 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:19.573 20:15:35 make -- common/autotest_common.sh@10 -- $ set +x 00:02:19.573 ************************************ 00:02:19.573 END TEST make 00:02:19.573 ************************************ 00:02:19.573 20:15:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:19.573 20:15:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:19.573 20:15:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:19.573 20:15:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.573 20:15:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:19.573 20:15:35 -- pm/common@44 -- $ pid=2688324 00:02:19.573 20:15:35 -- pm/common@50 -- $ kill -TERM 2688324 00:02:19.573 20:15:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.573 20:15:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:19.573 20:15:35 -- pm/common@44 -- $ pid=2688326 00:02:19.573 20:15:35 -- pm/common@50 -- $ kill -TERM 2688326 00:02:19.573 20:15:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.573 20:15:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:19.573 20:15:35 -- pm/common@44 -- $ pid=2688327 00:02:19.573 20:15:35 -- pm/common@50 -- $ kill -TERM 2688327 00:02:19.573 20:15:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.573 20:15:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:19.573 20:15:35 -- pm/common@44 -- $ pid=2688356 00:02:19.573 20:15:35 -- 
pm/common@50 -- $ sudo -E kill -TERM 2688356 00:02:19.573 20:15:35 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:19.573 20:15:35 -- nvmf/common.sh@7 -- # uname -s 00:02:19.573 20:15:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:19.573 20:15:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:19.573 20:15:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:19.573 20:15:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:19.573 20:15:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:19.573 20:15:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:19.573 20:15:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:19.573 20:15:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:19.573 20:15:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:19.573 20:15:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:19.573 20:15:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:19.573 20:15:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:19.573 20:15:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:19.573 20:15:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:19.573 20:15:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:19.573 20:15:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:19.573 20:15:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:19.573 20:15:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:19.573 20:15:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:19.573 20:15:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:19.573 20:15:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.573 20:15:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.573 20:15:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.573 20:15:35 -- paths/export.sh@5 -- # export PATH 00:02:19.573 20:15:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.573 20:15:35 -- nvmf/common.sh@47 -- # : 0 00:02:19.573 20:15:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:19.573 20:15:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:19.573 20:15:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:19.573 20:15:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:02:19.573 20:15:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:19.573 20:15:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:19.573 20:15:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:19.573 20:15:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:19.573 20:15:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:19.573 20:15:35 -- spdk/autotest.sh@32 -- # uname -s 00:02:19.573 20:15:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:19.573 20:15:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:19.573 20:15:35 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:19.573 20:15:35 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:19.573 20:15:35 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:19.573 20:15:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:19.834 20:15:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:19.834 20:15:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:19.834 20:15:35 -- spdk/autotest.sh@48 -- # udevadm_pid=2750579 00:02:19.834 20:15:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:19.834 20:15:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:19.834 20:15:35 -- pm/common@17 -- # local monitor 00:02:19.834 20:15:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.834 20:15:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.834 20:15:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.834 20:15:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.834 20:15:35 -- pm/common@21 -- # date +%s 00:02:19.834 20:15:35 -- pm/common@21 -- # date +%s 00:02:19.834 20:15:35 -- pm/common@25 -- # sleep 1 00:02:19.834 20:15:35 -- pm/common@21 -- # date +%s 00:02:19.834 20:15:35 -- pm/common@21 -- # date +%s 00:02:19.834 20:15:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715624135 00:02:19.834 20:15:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715624135 00:02:19.834 20:15:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715624135 00:02:19.834 20:15:35 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715624135 00:02:19.834 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715624135_collect-cpu-temp.pm.log 00:02:19.834 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715624135_collect-vmstat.pm.log 00:02:19.834 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715624135_collect-cpu-load.pm.log 00:02:19.834 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715624135_collect-bmc-pm.bmc.pm.log 00:02:20.777 20:15:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:20.777 20:15:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:20.777 20:15:36 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:20.777 20:15:36 -- common/autotest_common.sh@10 -- # set +x 00:02:20.777 20:15:36 -- spdk/autotest.sh@59 -- # create_test_list 00:02:20.777 20:15:36 -- common/autotest_common.sh@744 -- # xtrace_disable 00:02:20.777 20:15:36 -- common/autotest_common.sh@10 -- # set +x 00:02:20.777 20:15:36 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:20.777 20:15:36 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:20.777 20:15:36 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:20.777 20:15:36 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:20.777 20:15:36 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:20.777 20:15:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:20.777 20:15:36 -- common/autotest_common.sh@1451 -- # uname 00:02:20.777 20:15:36 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:02:20.777 20:15:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:20.777 20:15:36 -- common/autotest_common.sh@1471 -- # uname 00:02:20.777 20:15:36 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:02:20.777 20:15:36 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:20.777 20:15:36 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:20.777 20:15:36 -- spdk/autotest.sh@72 -- # hash lcov 00:02:20.777 20:15:36 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:20.777 20:15:36 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:20.777 --rc lcov_branch_coverage=1 00:02:20.777 --rc lcov_function_coverage=1 00:02:20.777 --rc genhtml_branch_coverage=1 00:02:20.777 --rc genhtml_function_coverage=1 00:02:20.777 --rc genhtml_legend=1 00:02:20.777 --rc geninfo_all_blocks=1 00:02:20.777 ' 00:02:20.777 20:15:36 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:20.777 --rc lcov_branch_coverage=1 00:02:20.777 --rc lcov_function_coverage=1 00:02:20.777 --rc genhtml_branch_coverage=1 00:02:20.777 --rc genhtml_function_coverage=1 00:02:20.777 --rc genhtml_legend=1 00:02:20.777 --rc geninfo_all_blocks=1 00:02:20.777 ' 00:02:20.777 20:15:36 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:20.777 --rc lcov_branch_coverage=1 00:02:20.777 --rc lcov_function_coverage=1 00:02:20.777 --rc genhtml_branch_coverage=1 00:02:20.777 --rc genhtml_function_coverage=1 00:02:20.777 --rc genhtml_legend=1 00:02:20.777 --rc geninfo_all_blocks=1 00:02:20.777 --no-external' 00:02:20.777 20:15:36 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:20.777 --rc lcov_branch_coverage=1 00:02:20.777 --rc lcov_function_coverage=1 00:02:20.777 --rc genhtml_branch_coverage=1 00:02:20.777 --rc genhtml_function_coverage=1 00:02:20.777 --rc genhtml_legend=1 00:02:20.777 --rc geninfo_all_blocks=1 00:02:20.777 --no-external' 00:02:20.777 20:15:36 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:20.777 lcov: LCOV 
version 1.14 00:02:20.777 20:15:36 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:30.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:30.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:32.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:32.767 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:32.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:32.767 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:32.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:32.767 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:47.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:47.680 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:47.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:47.680 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:47.681 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:47.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:47.681 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:47.682 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:47.682 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:47.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:47.682 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:48.638 20:16:04 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:48.638 20:16:04 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:48.638 20:16:04 -- common/autotest_common.sh@10 -- # set +x 00:02:48.638 20:16:04 -- spdk/autotest.sh@91 -- # rm -f 00:02:48.638 20:16:04 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:51.942 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:52.203 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:52.203 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:52.203 0000:80:01.5 (8086 0b00): Already using the 
ioatdma driver 00:02:52.203 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:52.203 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:52.203 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:52.203 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:52.203 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:52.203 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:52.203 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:52.203 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:52.203 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:52.463 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:52.463 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:52.463 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:52.463 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:52.725 20:16:08 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:52.725 20:16:08 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:52.725 20:16:08 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:52.725 20:16:08 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:52.725 20:16:08 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:52.725 20:16:08 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:52.725 20:16:08 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:52.725 20:16:08 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:52.725 20:16:08 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:52.725 20:16:08 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:52.725 20:16:08 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:52.725 20:16:08 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:52.725 20:16:08 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:52.725 20:16:08 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:52.725 20:16:08 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:52.725 No valid GPT data, bailing 00:02:52.725 20:16:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:52.725 20:16:08 -- scripts/common.sh@391 -- # pt= 00:02:52.725 20:16:08 -- scripts/common.sh@392 -- # return 1 00:02:52.725 20:16:08 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:52.725 1+0 records in 00:02:52.725 1+0 records out 00:02:52.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450184 s, 233 MB/s 00:02:52.725 20:16:08 -- spdk/autotest.sh@118 -- # sync 00:02:52.725 20:16:08 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:52.725 20:16:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:52.725 20:16:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:00.866 20:16:16 -- spdk/autotest.sh@124 -- # uname -s 00:03:00.866 20:16:16 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:00.866 20:16:16 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:00.866 20:16:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:00.866 20:16:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:00.866 20:16:16 -- common/autotest_common.sh@10 -- # set +x 00:03:00.866 ************************************ 00:03:00.866 START 
TEST setup.sh 00:03:00.866 ************************************ 00:03:00.866 20:16:16 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:00.866 * Looking for test storage... 00:03:00.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:00.866 20:16:16 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:00.866 20:16:16 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:00.866 20:16:16 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:00.866 20:16:16 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:00.866 20:16:16 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:00.866 20:16:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:00.866 ************************************ 00:03:00.866 START TEST acl 00:03:00.866 ************************************ 00:03:00.866 20:16:16 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:00.866 * Looking for test storage... 00:03:00.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:00.866 20:16:16 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:00.866 20:16:16 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:00.866 20:16:16 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:00.866 20:16:16 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:00.866 20:16:16 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:00.866 20:16:16 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:00.866 20:16:16 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:00.866 20:16:16 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:00.866 20:16:16 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:00.866 20:16:16 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:00.866 20:16:16 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:00.866 20:16:16 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:00.866 20:16:16 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:00.866 20:16:16 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:00.866 20:16:16 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:00.866 20:16:16 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.150 20:16:21 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:06.150 20:16:21 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:06.150 20:16:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.150 20:16:21 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:06.150 20:16:21 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.150 20:16:21 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:09.451 Hugepages 00:03:09.451 node hugesize free / total 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read 
-r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 00:03:09.451 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:09.451 20:16:24 
setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:09.451 20:16:24 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:09.451 20:16:24 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:09.451 20:16:24 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:09.451 20:16:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:09.451 ************************************ 00:03:09.451 START TEST denied 00:03:09.451 ************************************ 
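For readers following the xtrace above: the acl.sh prologue just walked a device-collection loop over the "Type BDF Vendor Device NUMA Driver Device Block devices" listing printed by setup.sh status. A condensed, illustrative sketch of that loop is below; it mirrors the traced checks (BDF-shaped first field, driver must be nvme, optional PCI_BLOCKED filter), but the relative scripts/setup.sh path and the echo output are placeholders, not taken from the test itself.

```bash
#!/usr/bin/env bash
# Sketch of the device-collection loop traced above: read the
# "Type BDF Vendor Device NUMA Driver ..." table from setup.sh status,
# keep only PCI functions (BDF-shaped first field) bound to the nvme
# driver and not listed in PCI_BLOCKED, and remember their driver.
devs=()
declare -A drivers

while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue        # skip hugepage summary and header rows
    [[ $driver == nvme ]] || continue        # ioatdma and other functions are ignored
    [[ -n $PCI_BLOCKED && $PCI_BLOCKED == *"$dev"* ]] && continue
    devs+=("$dev")
    drivers["$dev"]=$driver
done < <(./scripts/setup.sh status)

for dev in "${devs[@]}"; do
    echo "collected $dev (driver ${drivers[$dev]})"
done
```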
00:03:09.451 20:16:25 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:03:09.451 20:16:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:09.451 20:16:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:09.451 20:16:25 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:09.451 20:16:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.451 20:16:25 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:12.752 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:12.752 20:16:28 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:12.752 20:16:28 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:12.752 20:16:28 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:12.752 20:16:28 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:12.752 20:16:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:12.753 20:16:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:12.753 20:16:28 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:12.753 20:16:28 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:12.753 20:16:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:12.753 20:16:28 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.059 00:03:18.059 real 0m8.305s 00:03:18.059 user 0m2.556s 00:03:18.059 sys 0m4.814s 00:03:18.059 20:16:33 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:18.059 20:16:33 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:18.059 ************************************ 00:03:18.059 END TEST denied 00:03:18.059 ************************************ 00:03:18.059 20:16:33 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:18.059 20:16:33 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:18.059 20:16:33 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:18.059 20:16:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:18.059 ************************************ 00:03:18.059 START TEST allowed 00:03:18.059 ************************************ 00:03:18.059 20:16:33 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:03:18.059 20:16:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:18.059 20:16:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:18.059 20:16:33 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:18.059 20:16:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.059 20:16:33 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:23.412 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:23.412 20:16:39 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:23.412 20:16:39 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:23.412 20:16:39 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:23.412 20:16:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 
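The denied and allowed checks traced in this stretch of the log reduce to driving setup.sh with PCI_BLOCKED and PCI_ALLOWED and asserting on its output. A minimal sketch of that pattern follows, using the controller BDF and the message strings seen in this run; the relative scripts/setup.sh path and the final echo are illustrative assumptions, not part of the test.

```bash
#!/usr/bin/env bash
# Sketch of the ACL pattern: the same controller is first blocked (setup.sh
# must skip it and leave it on the kernel nvme driver), then explicitly
# allowed (setup.sh must rebind it from nvme to vfio-pci).
set -e
bdf=0000:65:00.0

# denied: controller must be skipped and keep its kernel driver
PCI_BLOCKED=" $bdf" ./scripts/setup.sh config | grep -q "Skipping denied controller at $bdf"
[[ $(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)") == nvme ]]
./scripts/setup.sh reset

# allowed: only this controller may be touched, and it must move to vfio-pci
PCI_ALLOWED=$bdf ./scripts/setup.sh config | grep -Eq "$bdf .*: nvme -> vfio-pci"
./scripts/setup.sh reset
echo "ACL checks passed for $bdf"
```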
00:03:23.412 20:16:39 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.702 00:03:28.702 real 0m10.228s 00:03:28.702 user 0m3.076s 00:03:28.702 sys 0m5.411s 00:03:28.702 20:16:43 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:28.702 20:16:43 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:28.702 ************************************ 00:03:28.702 END TEST allowed 00:03:28.702 ************************************ 00:03:28.702 00:03:28.702 real 0m26.988s 00:03:28.702 user 0m8.759s 00:03:28.702 sys 0m15.755s 00:03:28.702 20:16:43 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:28.702 20:16:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:28.702 ************************************ 00:03:28.702 END TEST acl 00:03:28.702 ************************************ 00:03:28.702 20:16:43 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:28.702 20:16:43 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:28.702 20:16:43 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:28.702 20:16:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:28.702 ************************************ 00:03:28.702 START TEST hugepages 00:03:28.702 ************************************ 00:03:28.702 20:16:43 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:28.702 * Looking for test storage... 00:03:28.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 103840716 kB' 'MemAvailable: 108299816 kB' 'Buffers: 4144 kB' 'Cached: 13575536 kB' 'SwapCached: 0 kB' 'Active: 9696400 kB' 'Inactive: 
4476392 kB' 'Active(anon): 9063824 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 596568 kB' 'Mapped: 213000 kB' 'Shmem: 8470712 kB' 'KReclaimable: 348924 kB' 'Slab: 1194196 kB' 'SReclaimable: 348924 kB' 'SUnreclaim: 845272 kB' 'KernelStack: 27552 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460888 kB' 'Committed_AS: 10510952 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237784 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.702 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 
20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.703 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 20:16:43 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.704 20:16:43 setup.sh.hugepages -- 
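
The scan traced above is setup/common.sh's get_meminfo walking /proc/meminfo with IFS=': ' until it reaches Hugepagesize, echoing 2048 (kB); hugepages.sh then takes that as default_hugepages and as the hugepages-2048kB sysfs path, and enumerates the two NUMA nodes. A minimal stand-alone sketch of that lookup pattern (illustrative only, assuming the simple /proc/meminfo case; the function name is not the SPDK helper, which also handles per-node meminfo files):

    # Illustrative sketch of the meminfo lookup pattern seen in the trace above.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(meminfo_value Hugepagesize)   # -> 2048 on this system
    default_huge_nr=/sys/kernel/mm/hugepages/hugepages-${default_hugepages}kB/nr_hugepages
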
setup/hugepages.sh@208 -- # clear_hp 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:28.704 20:16:43 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:28.704 20:16:43 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:28.704 20:16:43 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:28.704 20:16:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:28.704 ************************************ 00:03:28.704 START TEST default_setup 00:03:28.704 ************************************ 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- 
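
clear_hp above writes 0 into every hugepage-size counter on both nodes and exports CLEAR_HUGE=yes before the default_setup test starts. A stand-alone equivalent of that loop, as a sketch only (paths as in the trace; the function name is illustrative and, like the CI run, it assumes root privileges):

    # Illustrative: zero nr_hugepages for every page size on every NUMA node,
    # mirroring the clear_hp loop traced above.
    clear_all_hugepages() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes
    }
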
# local -g nodes_test 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.704 20:16:43 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.021 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:32.021 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
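
default_setup above requests 2097152 kB of hugepages pinned to node 0; with the 2048 kB default page size that works out to nr_hugepages=1024, which setup.sh then applies while rebinding the ioatdma and NVMe devices to vfio-pci. A small sketch of that size-to-page-count arithmetic and one way to apply it (variable names and the direct sysfs write are illustrative, not the exact hugepages.sh/setup.sh code):

    # Illustrative: convert the requested size in kB into a per-node page count,
    # as in the get_test_nr_hugepages 2097152 0 call traced above.
    size_kb=2097152
    hugepagesize_kb=2048                              # from Hugepagesize in /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 2097152 / 2048 = 1024
    echo "$nr_hugepages" > /sys/devices/system/node/node0/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages
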
00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105987508 kB' 'MemAvailable: 110446592 kB' 'Buffers: 4144 kB' 'Cached: 13575656 kB' 'SwapCached: 0 kB' 'Active: 9710408 kB' 'Inactive: 4476392 kB' 'Active(anon): 9077832 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609860 kB' 'Mapped: 213232 kB' 'Shmem: 8470832 kB' 'KReclaimable: 348892 kB' 'Slab: 1192732 kB' 'SReclaimable: 348892 kB' 'SUnreclaim: 843840 kB' 'KernelStack: 27456 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10518056 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237768 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.598 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105987820 kB' 'MemAvailable: 110446904 kB' 'Buffers: 4144 kB' 'Cached: 13575656 kB' 'SwapCached: 0 kB' 'Active: 9710432 kB' 'Inactive: 4476392 kB' 'Active(anon): 9077856 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609888 kB' 'Mapped: 213168 kB' 'Shmem: 8470832 kB' 'KReclaimable: 348892 kB' 'Slab: 1192732 kB' 'SReclaimable: 348892 kB' 'SUnreclaim: 843840 kB' 'KernelStack: 27440 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10518204 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237768 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.599 20:16:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.599 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.600 20:16:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.600 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 
20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@99 -- # surp=0 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105988072 kB' 'MemAvailable: 110447156 kB' 'Buffers: 4144 kB' 'Cached: 13575680 kB' 'SwapCached: 0 kB' 'Active: 9709952 kB' 'Inactive: 4476392 kB' 'Active(anon): 9077376 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609804 kB' 'Mapped: 212680 kB' 'Shmem: 8470856 kB' 'KReclaimable: 348892 kB' 'Slab: 1192732 kB' 'SReclaimable: 348892 kB' 'SUnreclaim: 843840 kB' 'KernelStack: 27456 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10518364 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237736 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.601 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.602 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:32.603 nr_hugepages=1024 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.603 resv_hugepages=0 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.603 surplus_hugepages=0 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.603 anon_hugepages=0 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:32.603 
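What the trace above shows: get_meminfo dumps the chosen meminfo file and then walks it key by key, emitting one "continue" line for every field that is not the one requested, until it reaches HugePages_Rsvd and echoes its value (0), so resv=0. A simplified, functionally equivalent lookup (the helper name and the sed/awk form are illustrative, not the literal common.sh code) would be:

  get_meminfo_sketch() {
      local key=$1 node=$2 file=/proc/meminfo
      # Per-node queries read that node's own meminfo file instead of the global one.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && file=/sys/devices/system/node/node$node/meminfo
      # Strip any "Node N " prefix so both files parse the same way, then print the value.
      sed 's/^Node [0-9]* //' "$file" | awk -v k="$key:" '$1 == k {print $2; exit}'
  }

With surp=0 and resv=0 known, the check that follows, (( 1024 == nr_hugepages + surp + resv )), confirms that all 1024 configured 2048 kB pages are ordinary persistent pages with no reserved or surplus component; the identical meminfo dump below is simply the next get_meminfo call, this time for HugePages_Total.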
20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105987320 kB' 'MemAvailable: 110446404 kB' 'Buffers: 4144 kB' 'Cached: 13575712 kB' 'SwapCached: 0 kB' 'Active: 9709904 kB' 'Inactive: 4476392 kB' 'Active(anon): 9077328 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609776 kB' 'Mapped: 212680 kB' 'Shmem: 8470888 kB' 'KReclaimable: 348892 kB' 'Slab: 1192732 kB' 'SReclaimable: 348892 kB' 'SUnreclaim: 843840 kB' 'KernelStack: 27472 kB' 'PageTables: 9168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10518756 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237752 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.603 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 
20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.604 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
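The HugePages_Total lookup that just completed returned 1024, so the global check passes and the script turns to per-node accounting: get_nodes records how many pages each NUMA node actually holds (1024 on node 0, 0 on node 1 of this two-node machine), and for each node the test expects, the expected count is grown by the reserved pages and by that node's surplus read from its own meminfo file. A condensed, self-contained sketch of that loop, reusing the hypothetical get_meminfo_sketch helper from the earlier note:

  resv=0                             # from the HugePages_Rsvd lookup above
  declare -a nodes_test=([0]=1024)   # pages this test expects on node 0
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      (( nodes_test[node] += $(get_meminfo_sketch HugePages_Surp "$node") ))
  done
  echo "node0=${nodes_test[0]} expecting 1024"

The dump that follows comes from that per-node HugePages_Surp query against /sys/devices/system/node/node0/meminfo.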
+([0-9]) }") 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57890276 kB' 'MemUsed: 7768732 kB' 'SwapCached: 0 kB' 'Active: 2426252 kB' 'Inactive: 1099744 kB' 'Active(anon): 2267144 kB' 'Inactive(anon): 0 kB' 'Active(file): 159108 kB' 'Inactive(file): 1099744 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3368540 kB' 'Mapped: 46624 kB' 'AnonPages: 160700 kB' 'Shmem: 2109688 kB' 'KernelStack: 14248 kB' 'PageTables: 2908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175808 kB' 'Slab: 578560 kB' 'SReclaimable: 175808 kB' 'SUnreclaim: 402752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.605 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:32.606 node0=1024 expecting 1024 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:32.606 00:03:32.606 real 0m4.486s 00:03:32.606 user 0m1.755s 00:03:32.606 sys 0m2.758s 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:32.606 20:16:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:32.606 ************************************ 00:03:32.606 END TEST default_setup 00:03:32.606 ************************************ 00:03:32.606 20:16:48 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:32.606 20:16:48 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:32.606 20:16:48 setup.sh.hugepages -- 
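default_setup ends here: node 0 reports 1024 huge pages against an expectation of 1024, the [[ 1024 == 1024 ]] comparison passes, and the time summary (real 0m4.486s) closes the test before the harness moves on. The START/END banners and the timing come from the run_test wrapper in autotest_common.sh; a rough, simplified sketch of its shape (assumed, not the actual implementation) is:

  run_test_sketch() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"                 # run the test function; report real/user/sys
      echo "************ END TEST $name ************"
  }
  # e.g. run_test_sketch per_node_1G_alloc per_node_1G_alloc

The '[' 2 -le 1 ']' check visible in the call appears to be the wrapper verifying it was given both a test name and a command before running.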
common/autotest_common.sh@1103 -- # xtrace_disable 00:03:32.606 20:16:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.606 ************************************ 00:03:32.606 START TEST per_node_1G_alloc 00:03:32.606 ************************************ 00:03:32.606 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:03:32.606 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:32.606 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:32.606 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.607 20:16:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:36.814 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:80:01.4 
(8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:36.814 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:36.814 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:36.814 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:36.814 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:36.814 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.814 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.814 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:36.814 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:36.814 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:36.814 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.814 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.814 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106007900 kB' 'MemAvailable: 110466984 kB' 'Buffers: 4144 kB' 'Cached: 13575828 kB' 'SwapCached: 0 kB' 
'Active: 9710584 kB' 'Inactive: 4476392 kB' 'Active(anon): 9078008 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609776 kB' 'Mapped: 211704 kB' 'Shmem: 8471004 kB' 'KReclaimable: 348892 kB' 'Slab: 1192040 kB' 'SReclaimable: 348892 kB' 'SUnreclaim: 843148 kB' 'KernelStack: 27600 kB' 'PageTables: 9396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10513292 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238040 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.815 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
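[editor's note] The trace above is setup/common.sh's get_meminfo walking every field of /proc/meminfo with IFS=': ' until it reaches the key it was asked for (here AnonHugePages, and shortly afterwards HugePages_Surp and HugePages_Rsvd); every non-matching field shows up as one "[[ <field> == ... ]] / continue" pair in the log. A minimal stand-alone sketch of that pattern, assuming the system-wide /proc/meminfo rather than the per-node meminfo files the real helper can also read, and not the SPDK helper itself:

    #!/usr/bin/env bash
    # Simplified approximation of the get_meminfo pattern traced above:
    # scan /proc/meminfo field by field and print the value of the first
    # field whose name matches the requested key.
    get_meminfo_sketch() {
        local get=$1
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # value in kB, or a page count for HugePages_* keys
                return 0
            fi
            # non-matching fields simply fall through; in the traced helper each
            # of them appears as one "[[ <field> == ... ]] / continue" pair
        done </proc/meminfo
        return 1
    }

    anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB on this run
    surp=$(get_meminfo_sketch HugePages_Surp)  # expected to be 0
    echo "AnonHugePages=${anon}kB HugePages_Surp=${surp}"

On this run the helper returns 0 for AnonHugePages, which is what lets verify_nr_hugepages set anon=0 a few entries later before moving on to the HugePages_Surp scan.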
00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106007648 kB' 'MemAvailable: 110466732 kB' 'Buffers: 4144 kB' 'Cached: 13575828 kB' 'SwapCached: 0 kB' 'Active: 9711672 kB' 'Inactive: 4476392 kB' 'Active(anon): 9079096 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610920 kB' 'Mapped: 211704 kB' 'Shmem: 8471004 kB' 'KReclaimable: 348892 kB' 'Slab: 1192024 kB' 'SReclaimable: 348892 kB' 'SUnreclaim: 843132 kB' 'KernelStack: 27568 kB' 'PageTables: 9304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10513460 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238056 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:36.816 20:16:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.816 20:16:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.816 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:36.817 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.818 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.818 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.818 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.818 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.818 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.818 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.818 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.818 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106010056 kB' 'MemAvailable: 110469108 kB' 'Buffers: 4144 kB' 'Cached: 13575828 kB' 'SwapCached: 0 kB' 'Active: 9711156 kB' 'Inactive: 4476392 kB' 'Active(anon): 9078580 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610460 kB' 'Mapped: 211656 kB' 'Shmem: 8471004 kB' 'KReclaimable: 348828 kB' 'Slab: 1192000 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843172 kB' 'KernelStack: 27584 kB' 'PageTables: 9516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10513484 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238040 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB'
[xtrace condensed: setup/common.sh@31-@32 loops "IFS=': '; read -r var val _" over the dump above; every key from MemTotal through HugePages_Free fails the \H\u\g\e\P\a\g\e\s\_\R\s\v\d match and hits continue]
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:36.819 nr_hugepages=1024
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:36.819 resv_hugepages=0
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:36.819 surplus_hugepages=0
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:36.819 anon_hugepages=0
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.819 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106013800 kB' 'MemAvailable: 110472852 kB' 'Buffers: 4144 kB' 'Cached: 13575868 kB' 'SwapCached: 0 kB' 'Active: 9709976 kB' 'Inactive: 4476392 kB' 'Active(anon): 9077400 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609644 kB' 'Mapped: 211584 kB' 'Shmem: 8471044 kB' 'KReclaimable: 348828 kB' 'Slab: 1191904 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843076 kB' 'KernelStack: 27632 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10513508 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238008 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB'
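[sketch: the xtrace above is the get_meminfo helper in setup/common.sh splitting each 'Key: value' line of a meminfo dump on ': ' and echoing the value once the requested key matches. A minimal stand-alone illustration of that parse, assuming the made-up function name meminfo_value (not the project's code verbatim):]
  meminfo_value() {
      local get=$1 var val
      while IFS=': ' read -r var val _; do   # "Key:   value kB" -> var=Key, val=value, _=kB
          if [[ $var == "$get" ]]; then
              echo "$val"                    # print only the numeric value
              return 0
          fi
      done < /proc/meminfo
      return 1                               # key not present
  }
  meminfo_value HugePages_Rsvd               # -> 0 on this box, per the dump above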
[xtrace condensed: setup/common.sh@31-@32 repeats the same "IFS=': '; read -r var val _" scan over the dump above; every key from MemTotal through Unaccepted fails the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and hits continue]
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.821 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58967232 kB' 'MemUsed: 6691776 kB' 'SwapCached: 0 kB' 'Active: 2428464 kB' 'Inactive: 1099744 kB' 'Active(anon): 2269356 kB' 'Inactive(anon): 0 kB' 'Active(file): 159108 kB' 'Inactive(file): 1099744 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3368672 kB' 'Mapped: 46660 kB' 'AnonPages: 162776 kB' 'Shmem: 2109820 kB' 'KernelStack: 14280 kB' 'PageTables: 2720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175744 kB' 'Slab: 577660 kB' 'SReclaimable: 175744 kB' 'SUnreclaim: 401916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
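[sketch: hugepages.sh@112-@117 above enumerates the NUMA nodes, expects the 1024 global hugepages to land as 512 per node on this 2-node box, and then folds per-node surplus/reserved counts in. A rough stand-alone version of that accounting follows; it uses a plain node[0-9]* glob instead of the script's extglob pattern, and the names expected_per_node/node_total/node_surp are illustrative only.]
  # Per-node hugepage accounting on a 2-node box with 1024 global hugepages.
  nr_hugepages=1024
  nodes=(/sys/devices/system/node/node[0-9]*)           # e.g. node0 node1
  expected_per_node=$(( nr_hugepages / ${#nodes[@]} ))   # 1024 / 2 = 512
  for n in "${nodes[@]}"; do
      node_total=$(grep HugePages_Total "$n/meminfo" | awk '{print $NF}')
      node_surp=$(grep HugePages_Surp "$n/meminfo" | awk '{print $NF}')
      echo "${n##*/}: total=$node_total surp=$node_surp expected=$expected_per_node"
  done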
[xtrace condensed: setup/common.sh@31-@32 scans the node0 dump above the same way; every key from MemTotal through HugePages_Free fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and hits continue]
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
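[sketch: common.sh@22-@29 above switches mem_f from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo when a node argument is given and strips the leading "Node N " prefix so the same 'Key: value' parser works for both files. An illustrative helper doing the same thing; node_meminfo_value is not the project's name, and the sed strip stands in for the script's extglob expansion:]
  node_meminfo_value() {
      local get=$1 node=$2 mem_f=/proc/meminfo var val
      # per-node dumps live under sysfs and prefix every line with "Node <N> "
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "$mem_f")   # strip the node prefix if present
      return 1
  }
  node_meminfo_value HugePages_Surp 0              # -> 0 on this box, per the node0 dump above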
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.822 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679868 kB' 'MemFree: 47048588 kB' 'MemUsed: 13631280 kB' 'SwapCached: 0 kB' 'Active: 7281656 kB' 'Inactive: 3376648 kB' 'Active(anon): 6808188 kB' 'Inactive(anon): 0 kB' 'Active(file): 473468 kB' 'Inactive(file): 3376648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10211344 kB' 'Mapped: 164916 kB' 'AnonPages: 447012 kB' 'Shmem: 6361228 kB' 'KernelStack: 13256 kB' 'PageTables: 6408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173084 kB' 'Slab: 614244 kB' 'SReclaimable: 173084 kB' 'SUnreclaim: 441160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace of the node1 field-by-field scan elided: MemTotal through HugePages_Free are each compared against HugePages_Surp and skipped with continue ...]
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:36.823 node0=512 expecting 512
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:36.823 node1=512 expecting 512
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:36.823
00:03:36.823 real 0m4.029s
00:03:36.823 user 0m1.550s
00:03:36.823 sys 0m2.501s
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:36.823 20:16:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:36.823 ************************************
00:03:36.823 END TEST per_node_1G_alloc
00:03:36.823 ************************************
00:03:36.823 20:16:52 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:36.823 20:16:52 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:36.823 20:16:52 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:36.823 20:16:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:36.823 ************************************
00:03:36.823 START TEST even_2G_alloc
00:03:36.823 ************************************
00:03:36.823 20:16:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:03:36.823 20:16:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:36.823 20:16:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:36.823 20:16:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:36.823 20:16:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:36.823 20:16:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:36.823 20:16:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:36.823 20:16:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
[... xtrace of get_test_nr_hugepages_per_node's bookkeeping elided: with no user-supplied node list, 512 pages are assigned to nodes_test[] for each of the two nodes and the loop closes ...]
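For reference, the sizing just traced (get_test_nr_hugepages 2097152 -> nr_hugepages=1024, then 512 pages per node) is plain arithmetic: divide the requested 2 GiB by the 2048 kB Hugepagesize reported in the meminfo dumps and spread the result evenly over the two NUMA nodes. The snippet below only illustrates that calculation with made-up variable names; it is not the hugepages.sh implementation.

#!/usr/bin/env bash
# Illustration only: reproduces the numbers seen in this run, not the
# hugepages.sh code itself.
total_kb=$((2 * 1024 * 1024))   # 2 GiB of hugepage memory requested, in kB
hugepage_kb=2048                # Hugepagesize reported in the meminfo dumps
no_nodes=2                      # NUMA nodes on this test rig

nr_hugepages=$(( total_kb / hugepage_kb ))   # -> 1024 pages in total
per_node=$(( nr_hugepages / no_nodes ))      # -> 512 pages on each node

for (( node = 0; node < no_nodes; node++ )); do
    echo "node${node}=${per_node} expecting ${per_node}"
done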
00:03:36.824 20:16:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:36.824 20:16:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:36.824 20:16:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:36.824 20:16:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:36.824 20:16:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:41.034 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:41.034 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:41.034 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106000320 kB' 'MemAvailable: 110459372 kB' 'Buffers: 4144 kB' 'Cached: 13576032 kB' 'SwapCached: 0 kB' 'Active: 9710864 kB' 'Inactive: 4476392 kB' 'Active(anon): 9078288 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610268 kB' 'Mapped: 211648 kB' 'Shmem: 8471208 kB' 'KReclaimable: 348828 kB' 'Slab: 1191812 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 842984 kB' 'KernelStack: 27360 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10511604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237912 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB'
[... xtrace of the field-by-field scan elided: MemTotal through HardwareCorrupted are each compared against AnonHugePages and skipped with continue ...]
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:41.036 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105999564 kB' 'MemAvailable: 110458616 kB' 'Buffers: 4144 kB' 'Cached: 13576032 kB' 'SwapCached: 0 kB' 'Active: 9711360 kB' 'Inactive: 4476392 kB' 'Active(anon): 9078784 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610788 kB' 'Mapped: 211648 kB' 'Shmem: 8471208 kB' 'KReclaimable: 348828 kB' 'Slab: 1191812 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 842984 kB' 'KernelStack: 27360 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10511620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237928 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB'
[... xtrace of the field-by-field scan elided: MemTotal through HugePages_Rsvd are each compared against HugePages_Surp and skipped with continue ...]
00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105999312 kB' 'MemAvailable: 110458364 kB' 'Buffers: 4144 kB' 'Cached: 13576036 kB' 'SwapCached: 0 kB' 'Active: 9711232 kB' 'Inactive: 4476392 kB' 'Active(anon): 9078656 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610640 kB' 'Mapped: 211584 kB' 'Shmem: 8471212 kB' 'KReclaimable: 348828 kB' 'Slab: 1191940 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843112 kB' 'KernelStack: 27392 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10511644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237928 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc 
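The /proc/meminfo snapshot just dumped reports HugePages_Total: 1024 with Hugepagesize: 2048 kB. As a quick sanity check (not part of the test scripts themselves, just arithmetic on the values shown above), the page count times the page size should reproduce the Hugetlb figure from the same dump:

  # hypothetical one-liner, using the numbers visible in the dump above
  echo $(( 1024 * 2048 ))   # 2097152 kB, i.e. the 2G total behind the even_2G_alloc case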
-- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.038 20:16:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.038 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.039 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:41.040 nr_hugepages=1024 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:41.040 resv_hugepages=0 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:41.040 surplus_hugepages=0 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:41.040 anon_hugepages=0 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- 
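At this point the trace has pulled HugePages_Surp and HugePages_Rsvd out of /proc/meminfo (both 0) and echoed the figures it will reconcile: nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0, before hugepages.sh checks that HugePages_Total equals nr_hugepages + surp + resv. The lookup being traced is setup/common.sh's get_meminfo, which splits each "Key: value" line on ': ' with read. A minimal stand-alone sketch of that lookup, under a hypothetical name, could look like:

  # meminfo_value is a hypothetical helper, not the suite's get_meminfo;
  # it mirrors the IFS=': ' / read -r var val _ loop visible in the trace.
  meminfo_value() {
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  # e.g. meminfo_value HugePages_Total   -> 1024 on this machine
  #      meminfo_value HugePages_Rsvd    -> 0

The real helper additionally accepts a node argument and strips the "Node <n> " prefix from per-node meminfo files, as the mem=("${mem[@]#Node +([0-9]) }") substitution in the trace shows.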
setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105999312 kB' 'MemAvailable: 110458364 kB' 'Buffers: 4144 kB' 'Cached: 13576036 kB' 'SwapCached: 0 kB' 'Active: 9710864 kB' 'Inactive: 4476392 kB' 'Active(anon): 9078288 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610272 kB' 'Mapped: 211584 kB' 'Shmem: 8471212 kB' 'KReclaimable: 348828 kB' 'Slab: 1191940 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843112 kB' 'KernelStack: 27376 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10511664 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237928 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.040 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.041 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- 
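Having confirmed HugePages_Total is 1024, the trace moves on to get_nodes, finds two NUMA nodes under /sys/devices/system/node, records the expected 512 pages for each (the even split of 1024), and then re-runs the same meminfo lookup against /sys/devices/system/node/node0/meminfo. A manual spot check of that split, assuming the two-node layout shown in this run, might be:

  # hypothetical spot check, not part of the suite: per-node meminfo lines
  # carry a "Node <n> " prefix, so just grep the hugepage totals per node
  for f in /sys/devices/system/node/node[0-9]*/meminfo; do
      grep HugePages_Total "$f"
  done
  # expected on this machine:
  #   Node 0 HugePages_Total:   512
  #   Node 1 HugePages_Total:   512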
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58951612 kB' 'MemUsed: 6707396 kB' 'SwapCached: 0 kB' 'Active: 2426812 kB' 'Inactive: 1099744 kB' 'Active(anon): 2267704 kB' 'Inactive(anon): 0 kB' 'Active(file): 159108 kB' 'Inactive(file): 1099744 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3368760 kB' 'Mapped: 46676 kB' 'AnonPages: 160868 kB' 'Shmem: 2109908 kB' 'KernelStack: 14168 kB' 'PageTables: 2696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175744 kB' 'Slab: 577516 kB' 'SReclaimable: 175744 kB' 'SUnreclaim: 401772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.042 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.043 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:41.044 20:16:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679868 kB' 'MemFree: 47048204 kB' 'MemUsed: 13631664 kB' 'SwapCached: 0 kB' 'Active: 7284696 kB' 'Inactive: 3376648 kB' 'Active(anon): 6811228 kB' 'Inactive(anon): 0 kB' 'Active(file): 473468 kB' 'Inactive(file): 3376648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10211420 kB' 'Mapped: 164908 kB' 'AnonPages: 450048 kB' 'Shmem: 6361304 kB' 'KernelStack: 13192 kB' 'PageTables: 6152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173084 kB' 'Slab: 614424 kB' 'SReclaimable: 173084 kB' 'SUnreclaim: 441340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 
20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.044 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:41.045 node0=512 expecting 512 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:41.045 node1=512 expecting 512 00:03:41.045 20:16:56 
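The two "expecting 512" lines above are the even_2G_alloc verification: the test reads each node's hugepage counters out of /sys/devices/system/node/node<N>/meminfo and checks them against an even split of the 1024-page pool. A minimal standalone sketch of that kind of check, for illustration only (this is not the suite's setup/common.sh helper; the 512-per-node expectation and the sysfs paths are taken from the trace above):

expected=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like "Node 0 HugePages_Total:   512"
    total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
    echo "node${node}=${total} expecting ${expected}"
done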
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:41.045 00:03:41.045 real 0m4.292s 00:03:41.045 user 0m1.617s 00:03:41.045 sys 0m2.728s 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:41.045 20:16:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:41.045 ************************************ 00:03:41.045 END TEST even_2G_alloc 00:03:41.045 ************************************ 00:03:41.045 20:16:56 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:41.045 20:16:56 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:41.045 20:16:56 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.045 20:16:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:41.305 ************************************ 00:03:41.305 START TEST odd_alloc 00:03:41.305 ************************************ 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc 
-- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.305 20:16:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:45.513 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:45.513 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106032168 kB' 'MemAvailable: 110491220 kB' 'Buffers: 4144 kB' 'Cached: 13576212 kB' 'SwapCached: 0 kB' 'Active: 9712528 kB' 'Inactive: 4476392 kB' 'Active(anon): 9079952 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611392 kB' 'Mapped: 211620 kB' 'Shmem: 8471388 kB' 'KReclaimable: 348828 kB' 'Slab: 1191900 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843072 kB' 'KernelStack: 27408 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508440 kB' 'Committed_AS: 10513816 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237944 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 
20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.513 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.514 20:17:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106033148 kB' 'MemAvailable: 110492200 kB' 'Buffers: 4144 kB' 'Cached: 13576212 kB' 'SwapCached: 0 kB' 'Active: 9712220 kB' 'Inactive: 4476392 kB' 'Active(anon): 9079644 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611588 kB' 'Mapped: 211556 kB' 'Shmem: 8471388 kB' 'KReclaimable: 348828 kB' 'Slab: 1191900 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843072 kB' 'KernelStack: 27280 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508440 kB' 'Committed_AS: 10513832 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237864 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.514 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.515 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 
20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106033652 kB' 'MemAvailable: 110492704 kB' 'Buffers: 4144 kB' 'Cached: 13576212 kB' 'SwapCached: 0 kB' 'Active: 9711864 kB' 'Inactive: 4476392 kB' 'Active(anon): 9079288 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611256 kB' 'Mapped: 211556 kB' 'Shmem: 8471388 kB' 
'KReclaimable: 348828 kB' 'Slab: 1191900 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843072 kB' 'KernelStack: 27408 kB' 'PageTables: 9000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508440 kB' 'Committed_AS: 10515484 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237912 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.516 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.517 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:45.518 nr_hugepages=1025 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.518 resv_hugepages=0 
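[editor's note] The per-field scan traced above is setup/common.sh's get_meminfo walking /proc/meminfo (or a per-node meminfo file) with IFS=': ' and read -r var val _, returning the value of one field; in this run AnonHugePages, HugePages_Surp and HugePages_Rsvd all come back 0, and the entries that follow check that HugePages_Total (1025, split 512/513 across the two NUMA nodes) equals nr_hugepages plus surplus plus reserved. The sketch below is illustrative only, not SPDK's exact setup/common.sh code; the helper name get_meminfo_sketch is made up for this note.

#!/usr/bin/env bash
# Minimal sketch (assumption: not SPDK's exact implementation) of the lookup
# the trace above performs: scan a meminfo file line by line and print the
# value of the requested field, defaulting to 0 when it is absent.
get_meminfo_sketch() {
    local get=$1 node=$2              # field name, optional NUMA node number
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while read -r line; do
        line=${line#Node [0-9]* }     # per-node files prefix lines with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"          # e.g. "HugePages_Surp" -> "0"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

# Accounting the odd_alloc test verifies next: the odd request (1025 pages,
# 512 on node0 and 513 on node1 in this log) must be fully visible as
# HugePages_Total = nr_hugepages + surplus + reserved.
nr_hugepages=1025
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
total=$(get_meminfo_sketch HugePages_Total)
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"
[end editor's note]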
00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.518 surplus_hugepages=0 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.518 anon_hugepages=0 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106033088 kB' 'MemAvailable: 110492140 kB' 'Buffers: 4144 kB' 'Cached: 13576244 kB' 'SwapCached: 0 kB' 'Active: 9712352 kB' 'Inactive: 4476392 kB' 'Active(anon): 9079776 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 611632 kB' 'Mapped: 211620 kB' 'Shmem: 8471420 kB' 'KReclaimable: 348828 kB' 'Slab: 1191912 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843084 kB' 'KernelStack: 27504 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508440 kB' 'Committed_AS: 10515508 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237960 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.518 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.519 20:17:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.519 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58985372 kB' 'MemUsed: 6673636 kB' 'SwapCached: 0 kB' 'Active: 2427192 kB' 'Inactive: 1099744 kB' 'Active(anon): 2268084 kB' 'Inactive(anon): 0 kB' 'Active(file): 159108 kB' 'Inactive(file): 1099744 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3368904 kB' 'Mapped: 46696 kB' 'AnonPages: 161168 kB' 'Shmem: 2110052 kB' 'KernelStack: 14184 kB' 'PageTables: 2760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175744 kB' 'Slab: 577508 kB' 'SReclaimable: 175744 kB' 'SUnreclaim: 401764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.520 20:17:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.520 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
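The node 0 scan above keeps stepping through the remaining meminfo keys in the same way below. For reference, the lookup pattern being traced (mapfile the per-node meminfo, strip the "Node N " prefix, split on ': ', echo the value once the requested key matches) boils down to the condensed sketch here; the helper name get_node_meminfo is hypothetical and this is an assumed simplification rather than the actual setup/common.sh:

  get_node_meminfo() {
      # Look up one key for one NUMA node, falling back to the global file.
      local key=$1 node=$2 line var val _
      local mem_f=/sys/devices/system/node/node$node/meminfo
      [[ -e $mem_f ]] || mem_f=/proc/meminfo
      while read -r line; do
          line=${line#Node $node }                 # per-node rows start with "Node N "
          IFS=': ' read -r var val _ <<< "$line"   # split "Key:   value [kB]"
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < "$mem_f"
      return 1
  }
  get_node_meminfo HugePages_Surp 0   # the call traced here eventually echoes 0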
00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679868 kB' 'MemFree: 47050272 kB' 'MemUsed: 13629596 kB' 'SwapCached: 0 kB' 'Active: 7285008 kB' 'Inactive: 3376648 kB' 'Active(anon): 6811540 kB' 'Inactive(anon): 0 kB' 'Active(file): 473468 kB' 'Inactive(file): 3376648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10211484 kB' 'Mapped: 164912 kB' 'AnonPages: 450256 kB' 'Shmem: 6361368 kB' 'KernelStack: 13240 kB' 'PageTables: 6408 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173084 kB' 'Slab: 614404 kB' 'SReclaimable: 173084 kB' 'SUnreclaim: 441320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.521 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 
20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
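Node 1's HugePages_Surp is read back the same way over the next records, after which the test compares the per-node totals against an odd split of the 1025 requested pages (512 and 513, as echoed shortly). Purely as a hedged standalone sketch of that arithmetic (split_odd_pages is a hypothetical name, not a function from hugepages.sh):

  split_odd_pages() {
      # Spread NR pages across NODES nodes; the last node absorbs the remainder.
      local nr=$1 nodes=$2 i
      for (( i = 0; i < nodes; i++ )); do
          echo "node$i=$(( nr / nodes + (i == nodes - 1 ? nr % nodes : 0) ))"
      done
  }
  split_odd_pages 1025 2   # -> node0=512, node1=513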
00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.522 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.523 20:17:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:45.523 node0=512 expecting 513 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:45.523 node1=513 expecting 512 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:45.523 00:03:45.523 real 0m4.065s 00:03:45.523 user 0m1.635s 00:03:45.523 sys 0m2.473s 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:45.523 20:17:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:45.523 ************************************ 00:03:45.523 END TEST odd_alloc 00:03:45.523 ************************************ 00:03:45.523 20:17:01 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:45.523 20:17:01 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:45.523 20:17:01 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:45.523 20:17:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.523 ************************************ 00:03:45.523 START TEST custom_alloc 00:03:45.523 ************************************ 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:45.523 
20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:45.523 20:17:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:45.524 20:17:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.524 20:17:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
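The two get_test_nr_hugepages passes above leave node 0 asking for 512 pages and node 1 for 1024, which the HUGENODE string just built encodes before setup.sh runs. As a hedged recap of the arithmetic only (assuming the requested sizes 1048576 and 2097152 are in kB and the hugepage size is the 2048 kB reported later in this log):

  hugepagesize_kb=2048
  declare -a nodes_hp
  nodes_hp[0]=$(( 1048576 / hugepagesize_kb ))   # -> 512 pages on node 0
  nodes_hp[1]=$(( 2097152 / hugepagesize_kb ))   # -> 1024 pages on node 1
  hugenode=() total=0
  for node in "${!nodes_hp[@]}"; do
      hugenode+=("nodes_hp[$node]=${nodes_hp[node]}")
      (( total += nodes_hp[node] ))
  done
  ( IFS=,; echo "HUGENODE=${hugenode[*]}" )      # -> nodes_hp[0]=512,nodes_hp[1]=1024
  echo "total hugepages requested: $total"       # -> 1536, the count verified next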
00:03:48.825 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:48.825 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:48.825 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:48.825 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:48.825 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.825 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.825 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.825 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.825 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.825 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.825 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.825 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.825 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.825 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105006432 kB' 'MemAvailable: 109465484 kB' 
'Buffers: 4144 kB' 'Cached: 13576380 kB' 'SwapCached: 0 kB' 'Active: 9713348 kB' 'Inactive: 4476392 kB' 'Active(anon): 9080772 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612468 kB' 'Mapped: 211696 kB' 'Shmem: 8471556 kB' 'KReclaimable: 348828 kB' 'Slab: 1191700 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 842872 kB' 'KernelStack: 27520 kB' 'PageTables: 9292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985176 kB' 'Committed_AS: 10516260 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238104 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.826 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.827 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.827 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.827 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.827 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.827 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.827 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.827 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 
0 00:03:48.827 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.827 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.827 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105007096 kB' 'MemAvailable: 109466148 kB' 'Buffers: 4144 kB' 'Cached: 13576384 kB' 'SwapCached: 0 kB' 'Active: 9713384 kB' 'Inactive: 4476392 kB' 'Active(anon): 9080808 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612492 kB' 'Mapped: 211632 kB' 'Shmem: 8471560 kB' 'KReclaimable: 348828 kB' 'Slab: 1191720 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 842892 kB' 'KernelStack: 27648 kB' 'PageTables: 9616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985176 kB' 'Committed_AS: 10516276 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238104 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
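The trace also shows how the helper picks its data source (setup/common.sh@17-29): local node=, mem_f=/proc/meminfo, an existence test on /sys/devices/system/node/node$node/meminfo, and a mapfile followed by stripping the "Node +([0-9]) " prefix. With an empty node argument, as in this run, it falls back to /proc/meminfo. A rough equivalent, using sed instead of the extglob expansion, so treat it as an illustration rather than the script's exact code:

    # Sketch of the source selection seen in the setup/common.sh trace:
    # no node argument -> /proc/meminfo; node given -> per-NUMA-node file,
    # with its "Node <N> " line prefix removed so "key: value" parsing still works.
    meminfo_lines() {
        local node="$1" mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        sed -E 's/^Node [0-9]+ //' "$mem_f"
    }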
00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.094 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
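The long printf '%s\n' 'MemTotal: ...' snapshots above already contain the numbers the rest of this run keys off: HugePages_Total 1536, HugePages_Free 1536, HugePages_Rsvd 0, HugePages_Surp 0 and Hugepagesize 2048 kB. Those figures are internally consistent with the reported Hugetlb value, which a quick sanity check confirms:

    # pages x page size should equal the Hugetlb field from the same snapshot
    echo $(( 1536 * 2048 ))   # -> 3145728 (kB), matching 'Hugetlb: 3145728 kB'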
00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.095 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105008032 kB' 'MemAvailable: 109467084 kB' 'Buffers: 4144 kB' 'Cached: 13576404 kB' 'SwapCached: 0 kB' 'Active: 9713100 kB' 'Inactive: 4476392 kB' 'Active(anon): 9080524 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612132 kB' 'Mapped: 211632 kB' 'Shmem: 8471580 kB' 'KReclaimable: 348828 kB' 'Slab: 1191944 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843116 kB' 'KernelStack: 27520 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985176 kB' 'Committed_AS: 10514664 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238024 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 
20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.096 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:49.097 nr_hugepages=1536 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.097 resv_hugepages=0 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.097 surplus_hugepages=0 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.097 anon_hugepages=0 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105005800 kB' 'MemAvailable: 109464852 kB' 'Buffers: 4144 kB' 'Cached: 13576424 kB' 'SwapCached: 0 kB' 'Active: 9713624 kB' 'Inactive: 4476392 kB' 'Active(anon): 
9081048 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612668 kB' 'Mapped: 211632 kB' 'Shmem: 8471600 kB' 'KReclaimable: 348828 kB' 'Slab: 1191944 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843116 kB' 'KernelStack: 27488 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985176 kB' 'Committed_AS: 10514688 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238072 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.097 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 
20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.098 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
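[editor note] The long run of IFS=': ' / read -r var val _ / continue entries above is bash xtrace from the get_meminfo helper in setup/common.sh: it scans /proc/meminfo (or a per-node meminfo file) line by line, skipping every key until the requested one matches, then echoes that key's value. A minimal standalone sketch of the same lookup pattern, inferred from the trace rather than copied from the SPDK source (the name get_meminfo_sketch and the sed-based "Node N " prefix stripping are illustrative assumptions):

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        # Per-node queries read that NUMA node's own meminfo when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node meminfo lines carry a "Node <N> " prefix; strip it so the key
        # lands in $var, then skip keys until the requested one is found.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        echo 0
    }

On this host, get_meminfo_sketch HugePages_Total would print 1536, matching the nr_hugepages=1536 echoed by the test above.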
00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@112 -- # get_nodes 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.099 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58982724 kB' 'MemUsed: 6676284 kB' 'SwapCached: 0 kB' 'Active: 2427972 kB' 'Inactive: 1099744 kB' 'Active(anon): 2268864 kB' 'Inactive(anon): 0 kB' 'Active(file): 159108 kB' 'Inactive(file): 1099744 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3369068 kB' 'Mapped: 46712 kB' 'AnonPages: 161784 kB' 'Shmem: 2110216 kB' 'KernelStack: 14248 kB' 'PageTables: 2948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175744 kB' 'Slab: 577488 kB' 'SReclaimable: 175744 kB' 'SUnreclaim: 401744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 
20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.100 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679868 kB' 'MemFree: 46021572 kB' 'MemUsed: 14658296 kB' 'SwapCached: 0 kB' 'Active: 7285252 kB' 'Inactive: 3376648 kB' 'Active(anon): 6811784 kB' 'Inactive(anon): 0 kB' 'Active(file): 473468 kB' 'Inactive(file): 3376648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10211520 kB' 'Mapped: 164920 kB' 'AnonPages: 450468 kB' 'Shmem: 6361404 kB' 'KernelStack: 13288 kB' 'PageTables: 6468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173084 kB' 'Slab: 614456 kB' 'SReclaimable: 173084 kB' 'SUnreclaim: 441372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.101 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
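[editor note] The node-1 HugePages_Surp lookup traced here is the last input custom_alloc needs before it prints the "node0=512 expecting 512" / "node1=1024 expecting 1024" verdicts further down. A hedged sketch of that per-node comparison using the standard kernel sysfs counters (the expected 512/1024 split is taken from this run; the variable names are illustrative, not the SPDK helper's):

    declare -A expected=([0]=512 [1]=1024)   # split requested by this custom_alloc run
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # 2048kB matches the default hugepage size reported above (Hugepagesize: 2048 kB).
        actual=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node$node=$actual expecting ${expected[$node]:-0}"
    done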
00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.102 20:17:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:49.102 node0=512 expecting 512 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:49.102 node1=1024 expecting 1024 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:49.102 00:03:49.102 real 0m3.765s 00:03:49.102 user 0m1.466s 00:03:49.102 sys 0m2.326s 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:49.102 20:17:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:49.102 ************************************ 00:03:49.102 END TEST custom_alloc 00:03:49.102 ************************************ 00:03:49.102 20:17:04 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:49.102 20:17:04 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:49.102 20:17:04 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:49.102 20:17:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:49.102 ************************************ 00:03:49.102 START TEST no_shrink_alloc 00:03:49.102 ************************************ 00:03:49.102 20:17:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:03:49.102 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:49.102 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:49.102 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:49.102 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:49.102 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:49.102 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:49.102 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:49.102 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:49.102 
20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:49.102 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:49.102 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:49.103 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:49.103 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:49.103 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:49.103 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:49.103 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:49.103 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:49.103 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:49.103 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:49.103 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:49.103 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.103 20:17:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:53.316 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:53.316 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106056260 kB' 'MemAvailable: 110515312 kB' 'Buffers: 4144 kB' 'Cached: 13576572 kB' 'SwapCached: 0 kB' 'Active: 9715184 kB' 'Inactive: 4476392 kB' 'Active(anon): 9082608 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613652 kB' 'Mapped: 212296 kB' 'Shmem: 8471748 kB' 'KReclaimable: 348828 kB' 'Slab: 1192004 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843176 kB' 'KernelStack: 27392 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10515364 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237864 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 
20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
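The xtrace above is the get_meminfo helper scanning /proc/meminfo one field at a time: it splits each line on ': ', compares the key against the requested field (here AnonHugePages), and echoes the value once it matches. Below is a minimal sketch of that pattern, reconstructed from the trace rather than copied from setup/common.sh; the function name get_meminfo_sketch is made up for illustration and the real helper may differ in detail.

#!/usr/bin/env bash
shopt -s extglob
# Sketch of the field scan traced above (functional equivalent only, not the repo source).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node id, the helper switches to the per-node meminfo file.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local mem line var val _
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node <id> "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done
    echo 0
}
# Example: get_meminfo_sketch AnonHugePages   -> prints 0 on this host, matching the anon=0 seen in this trace.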
00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.317 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106057016 kB' 'MemAvailable: 110516068 kB' 'Buffers: 4144 
kB' 'Cached: 13576572 kB' 'SwapCached: 0 kB' 'Active: 9716364 kB' 'Inactive: 4476392 kB' 'Active(anon): 9083788 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614808 kB' 'Mapped: 212232 kB' 'Shmem: 8471748 kB' 'KReclaimable: 348828 kB' 'Slab: 1192004 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843176 kB' 'KernelStack: 27360 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10516704 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237800 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
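For reference, the hugepage counters in the /proc/meminfo snapshots printed above are internally consistent: 1024 hugepages of the default 2048 kB size account for exactly the 2097152 kB Hugetlb pool, which is also the size requested from get_test_nr_hugepages earlier in this test. A one-line check, with the values copied from the trace:

hugepages_total=1024      # HugePages_Total from the snapshot above
hugepagesize_kb=2048      # Hugepagesize (kB)
hugetlb_kb=2097152        # Hugetlb (kB)
(( hugetlb_kb == hugepages_total * hugepagesize_kb )) && echo "hugepage accounting consistent"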
00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.318 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
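The per-node expectations printed earlier in this log ("node0=512 expecting 512", "node1=1024 expecting 1024") correspond to counters the kernel also exposes under sysfs. The loop below is a hedged illustration of reading the 2048 kB per-node counts directly; the sysfs paths are the standard kernel layout, and the snippet is not taken from setup/common.sh.

# Print the current 2 MiB hugepage count for every NUMA node (standard sysfs paths).
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*/node}
    count_file=$node_dir/hugepages/hugepages-2048kB/nr_hugepages
    [[ -r $count_file ]] && echo "node${node}=$(<"$count_file")"
done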
00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106056032 kB' 'MemAvailable: 110515084 kB' 'Buffers: 4144 kB' 'Cached: 13576592 kB' 'SwapCached: 0 kB' 'Active: 9719912 kB' 'Inactive: 4476392 kB' 'Active(anon): 9087336 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618896 kB' 'Mapped: 212504 kB' 'Shmem: 8471768 kB' 'KReclaimable: 348828 kB' 'Slab: 1192016 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843188 kB' 'KernelStack: 27392 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10520300 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237788 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.319 
20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.319 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
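Earlier in this pass, verify_nr_hugepages first tested the string "always [madvise] never" against the pattern *[never]* before looking up AnonHugePages; that string is the kernel's report of the active transparent hugepage mode, with the selected mode in brackets. A small sketch of that gate, reconstructed from the trace (the sysfs path is the standard location and the variable names are illustrative):

thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP is not fully disabled, so anonymous huge pages may exist: read the counter.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon=0
fi
echo "AnonHugePages: ${anon:-0} kB"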
00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.320 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:53.321 nr_hugepages=1024 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:53.321 resv_hugepages=0 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:53.321 surplus_hugepages=0 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:53.321 anon_hugepages=0 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106055528 kB' 'MemAvailable: 110514580 kB' 'Buffers: 4144 kB' 'Cached: 13576616 kB' 'SwapCached: 0 kB' 'Active: 9714380 kB' 'Inactive: 4476392 kB' 'Active(anon): 9081804 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613356 kB' 'Mapped: 212000 kB' 'Shmem: 8471792 kB' 'KReclaimable: 348828 kB' 'Slab: 1192016 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843188 kB' 'KernelStack: 27392 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10514204 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237784 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:53.321 20:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.321 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed, 00:03:53.321-00:03:53.322 / 20:17:09: the same setup/common.sh@31-32 read loop walks the remaining /proc/meminfo fields from Inactive through Unaccepted and executes "continue" for every key that is not HugePages_Total]
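[editor's note] The loop condensed above is setup/common.sh's get_meminfo walking a meminfo file one field at a time (mapfile, then "IFS=': '; read -r var val _") until it reaches the requested key, then echoing that key's value. A self-contained sketch of the same lookup pattern follows; the function name and the example calls are illustrative assumptions, not the verbatim SPDK helper.

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern exercised in the xtrace above
# (illustrative only -- not the real setup/common.sh implementation).
shopt -s extglob

get_meminfo_sketch() {
    local want=$1 node=${2-}            # e.g. HugePages_Total, optional NUMA node
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node <N> " prefix of per-node files

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$want" ]]; then
            echo "$val"                 # value only; the trailing "kB" unit lands in "_"
            return 0
        fi
    done
    return 1
}

# Example: the counters the trace above is resolving on this host.
echo "nr_hugepages=$(get_meminfo_sketch HugePages_Total)"
echo "resv_hugepages=$(get_meminfo_sketch HugePages_Rsvd)"
echo "node0_surplus=$(get_meminfo_sketch HugePages_Surp 0)"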
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57933040 kB' 'MemUsed: 7725968 kB' 'SwapCached: 0 kB' 'Active: 2429456 kB' 'Inactive: 1099744 kB' 'Active(anon): 2270348 kB' 'Inactive(anon): 0 kB' 'Active(file): 159108 kB' 'Inactive(file): 1099744 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3369204 kB' 'Mapped: 46932 kB' 'AnonPages: 162768 kB' 'Shmem: 2110352 kB' 'KernelStack: 14264 kB' 'PageTables: 2888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175744 kB' 'Slab: 577608 kB' 'SReclaimable: 
175744 kB' 'SUnreclaim: 401864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.322 20:17:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _
[xtrace condensed, 00:03:53.322-00:03:53.323 / 20:17:09: the same read loop walks the node0 meminfo fields from Active(file) through FilePmdMapped and executes "continue" for every key that is not HugePages_Surp]
00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- #
[[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:53.323 node0=1024 expecting 1024 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.323 20:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:57.535 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:57.535 
0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:57.535 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106080704 kB' 'MemAvailable: 110539756 kB' 'Buffers: 4144 kB' 'Cached: 13576732 kB' 'SwapCached: 0 kB' 'Active: 9715256 kB' 'Inactive: 4476392 kB' 'Active(anon): 9082680 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614516 kB' 'Mapped: 211736 kB' 'Shmem: 8471908 kB' 'KReclaimable: 348828 kB' 'Slab: 1191844 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843016 kB' 'KernelStack: 27520 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10516376 kB' 
'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238136 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.535 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.536 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106082316 kB' 'MemAvailable: 110541368 kB' 'Buffers: 4144 kB' 'Cached: 13576732 kB' 'SwapCached: 0 kB' 'Active: 9716232 kB' 'Inactive: 4476392 kB' 'Active(anon): 9083656 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615124 kB' 'Mapped: 211772 kB' 'Shmem: 8471908 kB' 'KReclaimable: 348828 kB' 'Slab: 1191900 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843072 kB' 'KernelStack: 27584 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10516396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238088 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.537 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.538 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.539 
20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106084232 kB' 'MemAvailable: 110543284 kB' 'Buffers: 4144 kB' 'Cached: 13576732 kB' 'SwapCached: 0 kB' 'Active: 9715912 kB' 'Inactive: 4476392 kB' 'Active(anon): 9083336 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614856 kB' 'Mapped: 211772 kB' 'Shmem: 8471908 kB' 'KReclaimable: 348828 kB' 'Slab: 1191900 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843072 kB' 'KernelStack: 27584 kB' 'PageTables: 9208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10518048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238088 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.539 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.540 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:57.541 nr_hugepages=1024 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.541 resv_hugepages=0 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.541 surplus_hugepages=0 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.541 anon_hugepages=0 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 106082848 kB' 'MemAvailable: 110541900 kB' 'Buffers: 4144 kB' 'Cached: 13576732 kB' 'SwapCached: 0 kB' 'Active: 9716496 kB' 'Inactive: 4476392 kB' 'Active(anon): 9083920 kB' 'Inactive(anon): 0 kB' 'Active(file): 632576 kB' 'Inactive(file): 4476392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615308 kB' 'Mapped: 211736 kB' 'Shmem: 8471908 kB' 'KReclaimable: 348828 kB' 'Slab: 1191872 kB' 'SReclaimable: 348828 kB' 'SUnreclaim: 843044 kB' 'KernelStack: 27552 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 10518072 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238072 kB' 'VmallocChunk: 0 kB' 'Percpu: 116928 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4291956 kB' 'DirectMap2M: 47816704 kB' 'DirectMap1G: 83886080 kB' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
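The xtrace around this point is the setup/common.sh get_meminfo helper scanning /proc/meminfo (or a node's sysfs meminfo) one field at a time until it reaches the requested key; a condensed sketch of that lookup follows, with an illustrative function name and sed standing in for the mapfile/extglob "Node N " prefix strip that is visible in the trace.

# Condensed sketch of the meminfo lookup being traced (illustrative names; the
# traced setup/common.sh builds a mem[] array with mapfile and strips the
# "Node N " prefix with an extglob expansion instead of sed).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node statistics live in sysfs when a node number is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # skip fields until the key matches
        echo "$val"                         # e.g. 0 for HugePages_Rsvd in the trace above
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# Usage mirroring the trace: get_meminfo_sketch HugePages_Rsvd
#                            get_meminfo_sketch HugePages_Surp 0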
00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.541 20:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.541 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 
20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:57.542 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.543 20:17:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57948844 kB' 'MemUsed: 7710164 kB' 'SwapCached: 0 kB' 'Active: 2427508 kB' 'Inactive: 1099744 kB' 'Active(anon): 2268400 kB' 'Inactive(anon): 0 kB' 'Active(file): 159108 kB' 'Inactive(file): 1099744 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3369308 kB' 'Mapped: 46780 kB' 'AnonPages: 161136 kB' 'Shmem: 2110456 kB' 'KernelStack: 14184 kB' 'PageTables: 2748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175744 kB' 'Slab: 577532 kB' 'SReclaimable: 175744 kB' 'SUnreclaim: 401788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.543 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.544 
20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.544 20:17:13 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:57.544 node0=1024 expecting 1024 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:57.544 00:03:57.544 real 0m8.035s 00:03:57.544 user 0m3.036s 00:03:57.544 sys 0m5.083s 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:57.544 20:17:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:57.544 ************************************ 00:03:57.544 END TEST no_shrink_alloc 00:03:57.544 ************************************ 00:03:57.544 20:17:13 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:57.544 20:17:13 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:57.544 20:17:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:57.544 20:17:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.544 20:17:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.544 20:17:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.544 20:17:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.544 20:17:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:57.544 20:17:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.544 20:17:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.544 20:17:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.544 20:17:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.544 20:17:13 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:57.544 20:17:13 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:57.544 00:03:57.544 real 0m29.330s 00:03:57.544 user 0m11.311s 00:03:57.544 sys 0m18.291s 00:03:57.544 20:17:13 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:57.544 20:17:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.544 ************************************ 00:03:57.544 END TEST hugepages 00:03:57.544 ************************************ 00:03:57.544 20:17:13 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:57.544 20:17:13 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:57.544 20:17:13 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:57.544 20:17:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:57.544 ************************************ 00:03:57.544 START TEST driver 00:03:57.544 ************************************ 00:03:57.544 20:17:13 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:57.544 * Looking for test storage... 
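The clear_hp loop in the hugepages epilogue above zeroes every hugepage pool on every NUMA node before the driver test output continues; a minimal bash sketch of that pattern follows, with only the sysfs paths and the CLEAR_HUGE export taken from the trace and everything else (function name, sudo usage) illustrative.

# Minimal sketch of the clear_hp cleanup traced above; the CI job runs the
# real loop as root, so the sudo here is purely illustrative.
clear_hp_sketch() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # Drop every reserved page of every page size on every NUMA node.
            echo 0 | sudo tee "$hp/nr_hugepages" >/dev/null
        done
    done
    export CLEAR_HUGE=yes   # exported by the traced script; presumably read by later setup.sh invocations
}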
00:03:57.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:57.544 20:17:13 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:57.544 20:17:13 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:57.544 20:17:13 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.927 20:17:18 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:02.927 20:17:18 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:02.927 20:17:18 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:02.927 20:17:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:02.927 ************************************ 00:04:02.927 START TEST guess_driver 00:04:02.927 ************************************ 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:02.927 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:02.927 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:02.927 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:02.927 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:02.927 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:02.927 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:02.927 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:02.927 20:17:18 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:02.927 Looking for driver=vfio-pci 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.927 20:17:18 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.226 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.226 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.226 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.226 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.226 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.226 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.486 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.487 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.746 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:06.746 20:17:22 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:06.746 20:17:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.746 20:17:22 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.325 00:04:13.325 real 0m9.385s 00:04:13.325 user 0m3.120s 00:04:13.325 sys 0m5.457s 00:04:13.325 20:17:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:13.325 20:17:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.325 ************************************ 00:04:13.325 END TEST guess_driver 00:04:13.325 ************************************ 00:04:13.325 00:04:13.325 real 0m14.863s 00:04:13.325 user 0m4.812s 00:04:13.325 sys 0m8.455s 00:04:13.325 20:17:28 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:13.325 
20:17:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.325 ************************************ 00:04:13.325 END TEST driver 00:04:13.325 ************************************ 00:04:13.325 20:17:28 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:13.325 20:17:28 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:13.325 20:17:28 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:13.325 20:17:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:13.325 ************************************ 00:04:13.325 START TEST devices 00:04:13.325 ************************************ 00:04:13.325 20:17:28 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:13.325 * Looking for test storage... 00:04:13.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:13.325 20:17:28 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:13.325 20:17:28 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:13.325 20:17:28 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.325 20:17:28 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.625 20:17:32 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:16.625 20:17:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:16.625 20:17:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:16.625 20:17:32 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:16.625 20:17:32 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:16.625 20:17:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:16.625 20:17:32 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:16.625 20:17:32 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.625 20:17:32 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:16.625 20:17:32 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:16.625 20:17:32 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:16.625 20:17:32 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:16.625 20:17:32 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:16.625 20:17:32 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:16.625 20:17:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:16.625 20:17:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:16.625 20:17:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:16.625 20:17:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:16.625 20:17:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:16.625 20:17:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:16.625 20:17:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:16.625 20:17:32 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:16.886 No valid GPT data, 
bailing 00:04:16.886 20:17:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.886 20:17:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:16.886 20:17:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:16.886 20:17:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:16.886 20:17:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:16.886 20:17:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:16.886 20:17:32 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:16.886 20:17:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:16.886 20:17:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:16.886 20:17:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:16.886 20:17:32 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:16.886 20:17:32 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:16.886 20:17:32 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:16.886 20:17:32 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:16.886 20:17:32 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:16.886 20:17:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:16.886 ************************************ 00:04:16.886 START TEST nvme_mount 00:04:16.886 ************************************ 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:16.886 20:17:32 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:16.886 20:17:32 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:17.827 Creating new GPT entries in memory. 00:04:17.827 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:17.827 other utilities. 00:04:17.827 20:17:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:17.827 20:17:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.827 20:17:33 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:17.827 20:17:33 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:17.827 20:17:33 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:18.770 Creating new GPT entries in memory. 00:04:18.770 The operation has completed successfully. 00:04:18.770 20:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:18.770 20:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.770 20:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2793765 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
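For readers skimming the trace, the mkfs/mount step exercised above reduces to a short sequence. A minimal sketch, assuming the nvme0n1p1 partition from this run and using NVME_MOUNT as a stand-in for the long /var/jenkins/workspace/.../spdk/test/setup/nvme_mount path; the dummy-file line is an assumption inferred from the test_nvme path that verify() later checks:

  NVME_MOUNT=$SPDK_TEST_SETUP/nvme_mount      # NVME_MOUNT/SPDK_TEST_SETUP are stand-ins, not script variables
  mkdir -p "$NVME_MOUNT"
  mkfs.ext4 -qF /dev/nvme0n1p1                # -q quiet, -F force over any existing signatures
  mount /dev/nvme0n1p1 "$NVME_MOUNT"
  touch "$NVME_MOUNT/test_nvme"               # dummy file the verify step looks for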
00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.031 20:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.333 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.904 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.904 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:22.904 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.904 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.904 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:22.904 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:22.904 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.904 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.904 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.904 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:22.904 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:22.904 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.904 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:23.165 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:23.165 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:23.165 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:23.165 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:23.165 20:17:38 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.165 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.374 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.374 20:17:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.673 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.245 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.245 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:31.245 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:31.245 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:31.245 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.245 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.245 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:31.245 20:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:31.245 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.245 00:04:31.245 real 0m14.301s 00:04:31.245 user 0m4.550s 00:04:31.245 sys 0m7.565s 00:04:31.245 20:17:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:31.245 20:17:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:31.245 ************************************ 00:04:31.245 END TEST nvme_mount 00:04:31.245 ************************************ 00:04:31.245 
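Before the dm_mount test starts, it is worth collecting the teardown that just ran into one place. Roughly what cleanup_nvme does, reconstructed from the commands visible in the trace (devices.sh@20 through @28), not copied from the script itself:

  cleanup_nvme() {
      # unmount the test mount point if it is still mounted
      mountpoint -q "$NVME_MOUNT" && umount "$NVME_MOUNT"
      # wipe signatures so the next test starts from a blank disk
      [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
      [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
  }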
20:17:46 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:31.245 20:17:46 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:31.245 20:17:46 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.245 20:17:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:31.245 ************************************ 00:04:31.245 START TEST dm_mount 00:04:31.245 ************************************ 00:04:31.245 20:17:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:31.245 20:17:47 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:31.245 20:17:47 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:31.245 20:17:47 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:31.245 20:17:47 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:31.245 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:31.245 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:31.245 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:31.245 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:31.245 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:31.245 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:31.245 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:31.246 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.246 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:31.246 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:31.246 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.246 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:31.246 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:31.246 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.246 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:31.246 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:31.246 20:17:47 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:32.187 Creating new GPT entries in memory. 00:04:32.187 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:32.187 other utilities. 00:04:32.187 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:32.187 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.187 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:32.187 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:32.187 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:33.130 Creating new GPT entries in memory. 00:04:33.130 The operation has completed successfully. 
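The partition_drive loop traced here repeats one sgdisk call per partition under flock. Condensed, with the extents taken from this run (each span is 2097152 sectors of 512 bytes, the 1 GiB computed by the (( size /= 512 )) step above):

  sgdisk /dev/nvme0n1 --zap-all                                    # drop any existing GPT/MBR
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199      # creates nvme0n1p1
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351   # creates nvme0n1p2

The flock wrapper serializes the table rewrite against other tools (udev probing, for example) that take the same advisory lock on the block device.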
00:04:33.391 20:17:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:33.391 20:17:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:33.391 20:17:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:33.391 20:17:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:33.391 20:17:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:34.333 The operation has completed successfully. 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2799347 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.333 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.538 20:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.538 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.538 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:38.538 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.538 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:38.538 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:38.538 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.538 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:38.538 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:38.538 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:38.539 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:38.539 
20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:38.539 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.539 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:38.539 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.539 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.539 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:38.539 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.539 20:17:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.539 20:17:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.841 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.102 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.102 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:42.102 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:42.102 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:42.102 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.102 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:42.102 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:42.102 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.102 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:42.102 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:42.102 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:42.102 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:42.102 00:04:42.102 real 0m10.914s 00:04:42.102 user 0m2.909s 00:04:42.102 sys 0m4.996s 00:04:42.102 20:17:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.102 20:17:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:42.102 ************************************ 00:04:42.102 END TEST dm_mount 00:04:42.102 ************************************ 00:04:42.102 20:17:57 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:42.102 20:17:57 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:42.102 20:17:57 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.102 20:17:57 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
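For the dm_mount flow, the create/teardown pair traced above can be sketched end to end. The linear table fed to dmsetup below is an assumption for illustration only (the test builds its own table from the two partitions); the teardown lines mirror the cleanup_dm calls visible in the trace:

  # create: concatenate the two 1 GiB partitions into a single dm target
  printf '%s\n' '0 2097152 linear /dev/nvme0n1p1 0' \
                '2097152 2097152 linear /dev/nvme0n1p2 0' | dmsetup create nvme_dm_test
  readlink -f /dev/mapper/nvme_dm_test             # resolved to /dev/dm-1 in this run

  # teardown (cleanup_dm): unmount, drop the mapping, wipe both partitions
  mountpoint -q "$DM_MOUNT" && umount "$DM_MOUNT"  # DM_MOUNT stands in for .../test/setup/dm_mount
  dmsetup remove --force nvme_dm_test
  wipefs --all /dev/nvme0n1p1
  wipefs --all /dev/nvme0n1p2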
00:04:42.102 20:17:57 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:42.102 20:17:57 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.102 20:17:57 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.362 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:42.362 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:42.362 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:42.362 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:42.362 20:17:58 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:42.362 20:17:58 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.362 20:17:58 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:42.362 20:17:58 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.362 20:17:58 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:42.362 20:17:58 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.362 20:17:58 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:42.362 00:04:42.362 real 0m30.177s 00:04:42.362 user 0m9.270s 00:04:42.362 sys 0m15.578s 00:04:42.362 20:17:58 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.362 20:17:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:42.362 ************************************ 00:04:42.362 END TEST devices 00:04:42.362 ************************************ 00:04:42.623 00:04:42.623 real 1m41.780s 00:04:42.623 user 0m34.308s 00:04:42.623 sys 0m58.358s 00:04:42.623 20:17:58 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.623 20:17:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:42.623 ************************************ 00:04:42.623 END TEST setup.sh 00:04:42.623 ************************************ 00:04:42.623 20:17:58 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:46.917 Hugepages 00:04:46.917 node hugesize free / total 00:04:46.917 node0 1048576kB 0 / 0 00:04:46.917 node0 2048kB 2048 / 2048 00:04:46.917 node1 1048576kB 0 / 0 00:04:46.917 node1 2048kB 0 / 0 00:04:46.917 00:04:46.917 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:46.917 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:46.917 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:46.917 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:46.917 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:46.917 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:46.917 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:46.917 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:46.917 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:46.917 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:46.917 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:46.917 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:46.917 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:46.917 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:46.917 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:46.917 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:46.917 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:46.917 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:46.917 20:18:02 -- spdk/autotest.sh@130 -- # uname -s 
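The per-node hugepage counts in the status table above come straight from sysfs. An equivalent manual check (illustrative only, setup.sh status does its own formatting):

  for node in /sys/devices/system/node/node*; do
      hp=$node/hugepages/hugepages-2048kB
      echo "$(basename "$node") 2048kB $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
  done

On this host that reproduces the 'node0 2048kB 2048 / 2048' line seen above, with no 1048576kB pages reserved on either node.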
00:04:46.917 20:18:02 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:46.917 20:18:02 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:46.917 20:18:02 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:50.216 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:50.216 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:50.216 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:50.216 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:50.216 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:50.216 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:50.216 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:50.216 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:50.216 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:50.216 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:50.216 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:50.216 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:50.216 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:50.216 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:50.475 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:50.475 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:52.387 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:52.387 20:18:08 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:53.329 20:18:09 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:53.329 20:18:09 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:53.329 20:18:09 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:53.329 20:18:09 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:53.329 20:18:09 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:53.329 20:18:09 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:53.329 20:18:09 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:53.329 20:18:09 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:53.329 20:18:09 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:53.589 20:18:09 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:53.589 20:18:09 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:04:53.589 20:18:09 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:57.799 Waiting for block devices as requested 00:04:57.799 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:57.799 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:57.799 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:57.799 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:57.799 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:57.799 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:57.799 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:57.799 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:58.059 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:58.059 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:58.320 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:58.320 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:58.320 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:58.320 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:58.581 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:58.581 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:58.581 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:58.843 20:18:14 -- common/autotest_common.sh@1534 -- # 
for bdf in "${bdfs[@]}" 00:04:58.843 20:18:14 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:58.843 20:18:14 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:04:58.843 20:18:14 -- common/autotest_common.sh@1498 -- # grep 0000:65:00.0/nvme/nvme 00:04:58.843 20:18:14 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:58.843 20:18:14 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:58.843 20:18:14 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:58.843 20:18:14 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:58.843 20:18:14 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:58.843 20:18:14 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:58.843 20:18:14 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:58.843 20:18:14 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:58.843 20:18:14 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:58.843 20:18:14 -- common/autotest_common.sh@1541 -- # oacs=' 0x5f' 00:04:58.843 20:18:14 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:58.843 20:18:14 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:58.843 20:18:14 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:58.843 20:18:14 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:58.843 20:18:14 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:58.843 20:18:14 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:58.843 20:18:14 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:58.843 20:18:14 -- common/autotest_common.sh@1553 -- # continue 00:04:58.843 20:18:14 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:58.843 20:18:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.843 20:18:14 -- common/autotest_common.sh@10 -- # set +x 00:04:58.843 20:18:14 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:58.843 20:18:14 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:58.843 20:18:14 -- common/autotest_common.sh@10 -- # set +x 00:04:58.843 20:18:14 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:03.051 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:03.051 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:03.052 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:03.052 20:18:18 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:03.052 20:18:18 -- common/autotest_common.sh@726 -- # xtrace_disable 
00:05:03.052 20:18:18 -- common/autotest_common.sh@10 -- # set +x 00:05:03.312 20:18:18 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:03.312 20:18:18 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:03.312 20:18:18 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:03.312 20:18:19 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:03.312 20:18:19 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:03.312 20:18:19 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:03.312 20:18:19 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:03.312 20:18:19 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:03.312 20:18:19 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.312 20:18:19 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:03.312 20:18:19 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:03.312 20:18:19 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:03.312 20:18:19 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:05:03.312 20:18:19 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:03.312 20:18:19 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:03.312 20:18:19 -- common/autotest_common.sh@1576 -- # device=0xa80a 00:05:03.312 20:18:19 -- common/autotest_common.sh@1577 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:03.312 20:18:19 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:03.312 20:18:19 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:03.312 20:18:19 -- common/autotest_common.sh@1589 -- # return 0 00:05:03.312 20:18:19 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:03.312 20:18:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:03.312 20:18:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:03.312 20:18:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:03.312 20:18:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:03.312 20:18:19 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:03.312 20:18:19 -- common/autotest_common.sh@10 -- # set +x 00:05:03.312 20:18:19 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:03.312 20:18:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.312 20:18:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.312 20:18:19 -- common/autotest_common.sh@10 -- # set +x 00:05:03.312 ************************************ 00:05:03.312 START TEST env 00:05:03.312 ************************************ 00:05:03.312 20:18:19 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:03.313 * Looking for test storage... 
00:05:03.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:03.573 20:18:19 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:03.573 20:18:19 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.573 20:18:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.573 20:18:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.573 ************************************ 00:05:03.573 START TEST env_memory 00:05:03.573 ************************************ 00:05:03.573 20:18:19 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:03.573 00:05:03.573 00:05:03.573 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.573 http://cunit.sourceforge.net/ 00:05:03.573 00:05:03.573 00:05:03.573 Suite: memory 00:05:03.573 Test: alloc and free memory map ...[2024-05-13 20:18:19.354587] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:03.573 passed 00:05:03.573 Test: mem map translation ...[2024-05-13 20:18:19.380405] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:03.573 [2024-05-13 20:18:19.380424] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:03.573 [2024-05-13 20:18:19.380469] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:03.573 [2024-05-13 20:18:19.380476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:03.573 passed 00:05:03.573 Test: mem map registration ...[2024-05-13 20:18:19.438233] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:03.573 [2024-05-13 20:18:19.438247] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:03.573 passed 00:05:03.573 Test: mem map adjacent registrations ...passed 00:05:03.573 00:05:03.573 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.573 suites 1 1 n/a 0 0 00:05:03.573 tests 4 4 4 0 0 00:05:03.573 asserts 152 152 152 0 n/a 00:05:03.573 00:05:03.573 Elapsed time = 0.201 seconds 00:05:03.573 00:05:03.573 real 0m0.212s 00:05:03.573 user 0m0.202s 00:05:03.573 sys 0m0.010s 00:05:03.573 20:18:19 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.573 20:18:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:03.573 ************************************ 00:05:03.573 END TEST env_memory 00:05:03.573 ************************************ 00:05:03.835 20:18:19 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:03.835 20:18:19 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.835 20:18:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:05:03.835 20:18:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.835 ************************************ 00:05:03.835 START TEST env_vtophys 00:05:03.835 ************************************ 00:05:03.835 20:18:19 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:03.835 EAL: lib.eal log level changed from notice to debug 00:05:03.835 EAL: Detected lcore 0 as core 0 on socket 0 00:05:03.835 EAL: Detected lcore 1 as core 1 on socket 0 00:05:03.835 EAL: Detected lcore 2 as core 2 on socket 0 00:05:03.835 EAL: Detected lcore 3 as core 3 on socket 0 00:05:03.835 EAL: Detected lcore 4 as core 4 on socket 0 00:05:03.835 EAL: Detected lcore 5 as core 5 on socket 0 00:05:03.835 EAL: Detected lcore 6 as core 6 on socket 0 00:05:03.835 EAL: Detected lcore 7 as core 7 on socket 0 00:05:03.835 EAL: Detected lcore 8 as core 8 on socket 0 00:05:03.835 EAL: Detected lcore 9 as core 9 on socket 0 00:05:03.835 EAL: Detected lcore 10 as core 10 on socket 0 00:05:03.835 EAL: Detected lcore 11 as core 11 on socket 0 00:05:03.835 EAL: Detected lcore 12 as core 12 on socket 0 00:05:03.835 EAL: Detected lcore 13 as core 13 on socket 0 00:05:03.835 EAL: Detected lcore 14 as core 14 on socket 0 00:05:03.835 EAL: Detected lcore 15 as core 15 on socket 0 00:05:03.835 EAL: Detected lcore 16 as core 16 on socket 0 00:05:03.835 EAL: Detected lcore 17 as core 17 on socket 0 00:05:03.835 EAL: Detected lcore 18 as core 18 on socket 0 00:05:03.835 EAL: Detected lcore 19 as core 19 on socket 0 00:05:03.835 EAL: Detected lcore 20 as core 20 on socket 0 00:05:03.835 EAL: Detected lcore 21 as core 21 on socket 0 00:05:03.835 EAL: Detected lcore 22 as core 22 on socket 0 00:05:03.835 EAL: Detected lcore 23 as core 23 on socket 0 00:05:03.835 EAL: Detected lcore 24 as core 24 on socket 0 00:05:03.835 EAL: Detected lcore 25 as core 25 on socket 0 00:05:03.835 EAL: Detected lcore 26 as core 26 on socket 0 00:05:03.835 EAL: Detected lcore 27 as core 27 on socket 0 00:05:03.835 EAL: Detected lcore 28 as core 28 on socket 0 00:05:03.835 EAL: Detected lcore 29 as core 29 on socket 0 00:05:03.835 EAL: Detected lcore 30 as core 30 on socket 0 00:05:03.835 EAL: Detected lcore 31 as core 31 on socket 0 00:05:03.835 EAL: Detected lcore 32 as core 32 on socket 0 00:05:03.835 EAL: Detected lcore 33 as core 33 on socket 0 00:05:03.835 EAL: Detected lcore 34 as core 34 on socket 0 00:05:03.835 EAL: Detected lcore 35 as core 35 on socket 0 00:05:03.835 EAL: Detected lcore 36 as core 0 on socket 1 00:05:03.835 EAL: Detected lcore 37 as core 1 on socket 1 00:05:03.835 EAL: Detected lcore 38 as core 2 on socket 1 00:05:03.835 EAL: Detected lcore 39 as core 3 on socket 1 00:05:03.835 EAL: Detected lcore 40 as core 4 on socket 1 00:05:03.835 EAL: Detected lcore 41 as core 5 on socket 1 00:05:03.835 EAL: Detected lcore 42 as core 6 on socket 1 00:05:03.835 EAL: Detected lcore 43 as core 7 on socket 1 00:05:03.835 EAL: Detected lcore 44 as core 8 on socket 1 00:05:03.835 EAL: Detected lcore 45 as core 9 on socket 1 00:05:03.835 EAL: Detected lcore 46 as core 10 on socket 1 00:05:03.835 EAL: Detected lcore 47 as core 11 on socket 1 00:05:03.835 EAL: Detected lcore 48 as core 12 on socket 1 00:05:03.835 EAL: Detected lcore 49 as core 13 on socket 1 00:05:03.835 EAL: Detected lcore 50 as core 14 on socket 1 00:05:03.835 EAL: Detected lcore 51 as core 15 on socket 1 00:05:03.835 EAL: Detected lcore 52 as core 16 on socket 1 00:05:03.835 EAL: Detected lcore 
53 as core 17 on socket 1 00:05:03.835 EAL: Detected lcore 54 as core 18 on socket 1 00:05:03.835 EAL: Detected lcore 55 as core 19 on socket 1 00:05:03.835 EAL: Detected lcore 56 as core 20 on socket 1 00:05:03.835 EAL: Detected lcore 57 as core 21 on socket 1 00:05:03.835 EAL: Detected lcore 58 as core 22 on socket 1 00:05:03.835 EAL: Detected lcore 59 as core 23 on socket 1 00:05:03.835 EAL: Detected lcore 60 as core 24 on socket 1 00:05:03.835 EAL: Detected lcore 61 as core 25 on socket 1 00:05:03.835 EAL: Detected lcore 62 as core 26 on socket 1 00:05:03.835 EAL: Detected lcore 63 as core 27 on socket 1 00:05:03.835 EAL: Detected lcore 64 as core 28 on socket 1 00:05:03.835 EAL: Detected lcore 65 as core 29 on socket 1 00:05:03.835 EAL: Detected lcore 66 as core 30 on socket 1 00:05:03.835 EAL: Detected lcore 67 as core 31 on socket 1 00:05:03.835 EAL: Detected lcore 68 as core 32 on socket 1 00:05:03.835 EAL: Detected lcore 69 as core 33 on socket 1 00:05:03.835 EAL: Detected lcore 70 as core 34 on socket 1 00:05:03.835 EAL: Detected lcore 71 as core 35 on socket 1 00:05:03.835 EAL: Detected lcore 72 as core 0 on socket 0 00:05:03.835 EAL: Detected lcore 73 as core 1 on socket 0 00:05:03.835 EAL: Detected lcore 74 as core 2 on socket 0 00:05:03.835 EAL: Detected lcore 75 as core 3 on socket 0 00:05:03.835 EAL: Detected lcore 76 as core 4 on socket 0 00:05:03.835 EAL: Detected lcore 77 as core 5 on socket 0 00:05:03.835 EAL: Detected lcore 78 as core 6 on socket 0 00:05:03.835 EAL: Detected lcore 79 as core 7 on socket 0 00:05:03.835 EAL: Detected lcore 80 as core 8 on socket 0 00:05:03.835 EAL: Detected lcore 81 as core 9 on socket 0 00:05:03.835 EAL: Detected lcore 82 as core 10 on socket 0 00:05:03.835 EAL: Detected lcore 83 as core 11 on socket 0 00:05:03.835 EAL: Detected lcore 84 as core 12 on socket 0 00:05:03.835 EAL: Detected lcore 85 as core 13 on socket 0 00:05:03.835 EAL: Detected lcore 86 as core 14 on socket 0 00:05:03.835 EAL: Detected lcore 87 as core 15 on socket 0 00:05:03.835 EAL: Detected lcore 88 as core 16 on socket 0 00:05:03.835 EAL: Detected lcore 89 as core 17 on socket 0 00:05:03.835 EAL: Detected lcore 90 as core 18 on socket 0 00:05:03.835 EAL: Detected lcore 91 as core 19 on socket 0 00:05:03.835 EAL: Detected lcore 92 as core 20 on socket 0 00:05:03.835 EAL: Detected lcore 93 as core 21 on socket 0 00:05:03.835 EAL: Detected lcore 94 as core 22 on socket 0 00:05:03.835 EAL: Detected lcore 95 as core 23 on socket 0 00:05:03.835 EAL: Detected lcore 96 as core 24 on socket 0 00:05:03.835 EAL: Detected lcore 97 as core 25 on socket 0 00:05:03.835 EAL: Detected lcore 98 as core 26 on socket 0 00:05:03.835 EAL: Detected lcore 99 as core 27 on socket 0 00:05:03.835 EAL: Detected lcore 100 as core 28 on socket 0 00:05:03.835 EAL: Detected lcore 101 as core 29 on socket 0 00:05:03.835 EAL: Detected lcore 102 as core 30 on socket 0 00:05:03.835 EAL: Detected lcore 103 as core 31 on socket 0 00:05:03.835 EAL: Detected lcore 104 as core 32 on socket 0 00:05:03.835 EAL: Detected lcore 105 as core 33 on socket 0 00:05:03.835 EAL: Detected lcore 106 as core 34 on socket 0 00:05:03.835 EAL: Detected lcore 107 as core 35 on socket 0 00:05:03.835 EAL: Detected lcore 108 as core 0 on socket 1 00:05:03.835 EAL: Detected lcore 109 as core 1 on socket 1 00:05:03.835 EAL: Detected lcore 110 as core 2 on socket 1 00:05:03.835 EAL: Detected lcore 111 as core 3 on socket 1 00:05:03.835 EAL: Detected lcore 112 as core 4 on socket 1 00:05:03.835 EAL: Detected lcore 113 as core 5 on 
socket 1 00:05:03.835 EAL: Detected lcore 114 as core 6 on socket 1 00:05:03.835 EAL: Detected lcore 115 as core 7 on socket 1 00:05:03.835 EAL: Detected lcore 116 as core 8 on socket 1 00:05:03.835 EAL: Detected lcore 117 as core 9 on socket 1 00:05:03.835 EAL: Detected lcore 118 as core 10 on socket 1 00:05:03.835 EAL: Detected lcore 119 as core 11 on socket 1 00:05:03.835 EAL: Detected lcore 120 as core 12 on socket 1 00:05:03.835 EAL: Detected lcore 121 as core 13 on socket 1 00:05:03.835 EAL: Detected lcore 122 as core 14 on socket 1 00:05:03.835 EAL: Detected lcore 123 as core 15 on socket 1 00:05:03.835 EAL: Detected lcore 124 as core 16 on socket 1 00:05:03.835 EAL: Detected lcore 125 as core 17 on socket 1 00:05:03.835 EAL: Detected lcore 126 as core 18 on socket 1 00:05:03.835 EAL: Detected lcore 127 as core 19 on socket 1 00:05:03.835 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:03.835 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:03.835 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:03.835 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:03.835 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:03.835 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:03.835 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:03.835 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:03.835 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:03.835 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:03.835 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:03.836 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:03.836 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:03.836 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:03.836 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:03.836 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:03.836 EAL: Maximum logical cores by configuration: 128 00:05:03.836 EAL: Detected CPU lcores: 128 00:05:03.836 EAL: Detected NUMA nodes: 2 00:05:03.836 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:03.836 EAL: Detected shared linkage of DPDK 00:05:03.836 EAL: No shared files mode enabled, IPC will be disabled 00:05:03.836 EAL: Bus pci wants IOVA as 'DC' 00:05:03.836 EAL: Buses did not request a specific IOVA mode. 00:05:03.836 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:03.836 EAL: Selected IOVA mode 'VA' 00:05:03.836 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.836 EAL: Probing VFIO support... 00:05:03.836 EAL: IOMMU type 1 (Type 1) is supported 00:05:03.836 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:03.836 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:03.836 EAL: VFIO support initialized 00:05:03.836 EAL: Ask a virtual area of 0x2e000 bytes 00:05:03.836 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:03.836 EAL: Setting up physically contiguous memory... 
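The EAL messages above report that an IOMMU is present, so VFIO support is initialized and IOVA mode 'VA' is selected. A minimal pre-flight check for that condition, assuming the usual sysfs layout (this snippet is an illustration only and is not part of the SPDK test scripts):

    # Hypothetical check: confirm IOMMU groups exist before expecting EAL to
    # pick IOVA mode 'VA' for vfio-pci-bound devices.
    if [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        echo "IOMMU active: EAL can select IOVA as VA"
    else
        echo "no IOMMU groups: EAL will likely fall back to IOVA as PA"
    fi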
00:05:03.836 EAL: Setting maximum number of open files to 524288 00:05:03.836 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:03.836 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:03.836 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:03.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.836 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:03.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.836 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:03.836 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:03.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.836 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:03.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.836 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:03.836 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:03.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.836 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:03.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.836 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:03.836 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:03.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.836 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:03.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.836 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:03.836 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:03.836 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:03.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.836 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:03.836 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.836 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:03.836 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:03.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.836 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:03.836 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.836 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:03.836 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:03.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.836 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:03.836 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.836 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:03.836 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:03.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.836 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:03.836 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.836 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:03.836 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:03.836 EAL: Hugepages will be freed exactly as allocated. 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: TSC frequency is ~2400000 KHz 00:05:03.836 EAL: Main lcore 0 is ready (tid=7f6595552a00;cpuset=[0]) 00:05:03.836 EAL: Trying to obtain current memory policy. 00:05:03.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.836 EAL: Restoring previous memory policy: 0 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was expanded by 2MB 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:03.836 EAL: Mem event callback 'spdk:(nil)' registered 00:05:03.836 00:05:03.836 00:05:03.836 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.836 http://cunit.sourceforge.net/ 00:05:03.836 00:05:03.836 00:05:03.836 Suite: components_suite 00:05:03.836 Test: vtophys_malloc_test ...passed 00:05:03.836 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:03.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.836 EAL: Restoring previous memory policy: 4 00:05:03.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was expanded by 4MB 00:05:03.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was shrunk by 4MB 00:05:03.836 EAL: Trying to obtain current memory policy. 00:05:03.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.836 EAL: Restoring previous memory policy: 4 00:05:03.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was expanded by 6MB 00:05:03.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was shrunk by 6MB 00:05:03.836 EAL: Trying to obtain current memory policy. 00:05:03.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.836 EAL: Restoring previous memory policy: 4 00:05:03.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was expanded by 10MB 00:05:03.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was shrunk by 10MB 00:05:03.836 EAL: Trying to obtain current memory policy. 
00:05:03.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.836 EAL: Restoring previous memory policy: 4 00:05:03.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was expanded by 18MB 00:05:03.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was shrunk by 18MB 00:05:03.836 EAL: Trying to obtain current memory policy. 00:05:03.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.836 EAL: Restoring previous memory policy: 4 00:05:03.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was expanded by 34MB 00:05:03.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was shrunk by 34MB 00:05:03.836 EAL: Trying to obtain current memory policy. 00:05:03.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.836 EAL: Restoring previous memory policy: 4 00:05:03.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was expanded by 66MB 00:05:03.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was shrunk by 66MB 00:05:03.836 EAL: Trying to obtain current memory policy. 00:05:03.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.836 EAL: Restoring previous memory policy: 4 00:05:03.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was expanded by 130MB 00:05:03.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.836 EAL: request: mp_malloc_sync 00:05:03.836 EAL: No shared files mode enabled, IPC is disabled 00:05:03.836 EAL: Heap on socket 0 was shrunk by 130MB 00:05:03.836 EAL: Trying to obtain current memory policy. 00:05:03.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.097 EAL: Restoring previous memory policy: 4 00:05:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.097 EAL: request: mp_malloc_sync 00:05:04.097 EAL: No shared files mode enabled, IPC is disabled 00:05:04.097 EAL: Heap on socket 0 was expanded by 258MB 00:05:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.097 EAL: request: mp_malloc_sync 00:05:04.097 EAL: No shared files mode enabled, IPC is disabled 00:05:04.097 EAL: Heap on socket 0 was shrunk by 258MB 00:05:04.097 EAL: Trying to obtain current memory policy. 
00:05:04.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.097 EAL: Restoring previous memory policy: 4 00:05:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.097 EAL: request: mp_malloc_sync 00:05:04.097 EAL: No shared files mode enabled, IPC is disabled 00:05:04.097 EAL: Heap on socket 0 was expanded by 514MB 00:05:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.097 EAL: request: mp_malloc_sync 00:05:04.097 EAL: No shared files mode enabled, IPC is disabled 00:05:04.097 EAL: Heap on socket 0 was shrunk by 514MB 00:05:04.097 EAL: Trying to obtain current memory policy. 00:05:04.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.358 EAL: Restoring previous memory policy: 4 00:05:04.358 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.358 EAL: request: mp_malloc_sync 00:05:04.358 EAL: No shared files mode enabled, IPC is disabled 00:05:04.358 EAL: Heap on socket 0 was expanded by 1026MB 00:05:04.358 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.619 EAL: request: mp_malloc_sync 00:05:04.619 EAL: No shared files mode enabled, IPC is disabled 00:05:04.619 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:04.619 passed 00:05:04.619 00:05:04.619 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.619 suites 1 1 n/a 0 0 00:05:04.619 tests 2 2 2 0 0 00:05:04.619 asserts 497 497 497 0 n/a 00:05:04.619 00:05:04.619 Elapsed time = 0.648 seconds 00:05:04.619 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.619 EAL: request: mp_malloc_sync 00:05:04.619 EAL: No shared files mode enabled, IPC is disabled 00:05:04.619 EAL: Heap on socket 0 was shrunk by 2MB 00:05:04.619 EAL: No shared files mode enabled, IPC is disabled 00:05:04.619 EAL: No shared files mode enabled, IPC is disabled 00:05:04.619 EAL: No shared files mode enabled, IPC is disabled 00:05:04.619 00:05:04.619 real 0m0.778s 00:05:04.619 user 0m0.416s 00:05:04.619 sys 0m0.329s 00:05:04.619 20:18:20 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.619 20:18:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:04.619 ************************************ 00:05:04.619 END TEST env_vtophys 00:05:04.619 ************************************ 00:05:04.619 20:18:20 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:04.619 20:18:20 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.619 20:18:20 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.619 20:18:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.619 ************************************ 00:05:04.619 START TEST env_pci 00:05:04.619 ************************************ 00:05:04.619 20:18:20 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:04.619 00:05:04.619 00:05:04.619 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.619 http://cunit.sourceforge.net/ 00:05:04.619 00:05:04.619 00:05:04.619 Suite: pci 00:05:04.619 Test: pci_hook ...[2024-05-13 20:18:20.468683] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2812607 has claimed it 00:05:04.619 EAL: Cannot find device (10000:00:01.0) 00:05:04.619 EAL: Failed to attach device on primary process 00:05:04.619 passed 00:05:04.619 00:05:04.619 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:04.619 suites 1 1 n/a 0 0 00:05:04.619 tests 1 1 1 0 0 00:05:04.619 asserts 25 25 25 0 n/a 00:05:04.619 00:05:04.619 Elapsed time = 0.034 seconds 00:05:04.619 00:05:04.619 real 0m0.055s 00:05:04.619 user 0m0.015s 00:05:04.619 sys 0m0.039s 00:05:04.619 20:18:20 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.619 20:18:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:04.619 ************************************ 00:05:04.619 END TEST env_pci 00:05:04.619 ************************************ 00:05:04.619 20:18:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:04.619 20:18:20 env -- env/env.sh@15 -- # uname 00:05:04.619 20:18:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:04.619 20:18:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:04.619 20:18:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:04.619 20:18:20 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:04.619 20:18:20 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.619 20:18:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.880 ************************************ 00:05:04.880 START TEST env_dpdk_post_init 00:05:04.880 ************************************ 00:05:04.880 20:18:20 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:04.880 EAL: Detected CPU lcores: 128 00:05:04.880 EAL: Detected NUMA nodes: 2 00:05:04.880 EAL: Detected shared linkage of DPDK 00:05:04.880 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:04.880 EAL: Selected IOVA mode 'VA' 00:05:04.880 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.880 EAL: VFIO support initialized 00:05:04.880 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:04.880 EAL: Using IOMMU type 1 (Type 1) 00:05:05.140 EAL: Ignore mapping IO port bar(1) 00:05:05.140 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:05.140 EAL: Ignore mapping IO port bar(1) 00:05:05.399 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:05.399 EAL: Ignore mapping IO port bar(1) 00:05:05.657 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:05.657 EAL: Ignore mapping IO port bar(1) 00:05:05.916 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:05.916 EAL: Ignore mapping IO port bar(1) 00:05:05.916 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:06.176 EAL: Ignore mapping IO port bar(1) 00:05:06.176 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:06.437 EAL: Ignore mapping IO port bar(1) 00:05:06.437 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:06.697 EAL: Ignore mapping IO port bar(1) 00:05:06.697 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:06.958 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:06.958 EAL: Ignore mapping IO port bar(1) 00:05:07.218 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:07.218 EAL: Ignore mapping IO port bar(1) 00:05:07.479 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
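The probe messages in this stretch of the log come from env_dpdk_post_init attaching to each I/OAT and NVMe function through vfio-pci. As a hedged illustration (not one of the helpers invoked by this run), the current driver binding of every PCI function can be listed with a loop like the following:

    # Hypothetical helper: print which kernel driver each PCI function is bound
    # to, mirroring the ioatdma/vfio-pci/nvme transitions recorded in this log.
    for dev in /sys/bus/pci/devices/*; do
        if drv=$(readlink "$dev/driver" 2>/dev/null); then
            printf '%s -> %s\n' "$(basename "$dev")" "$(basename "$drv")"
        else
            printf '%s -> none\n' "$(basename "$dev")"
        fi
    done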
00:05:07.479 EAL: Ignore mapping IO port bar(1) 00:05:07.479 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:07.740 EAL: Ignore mapping IO port bar(1) 00:05:07.740 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:07.999 EAL: Ignore mapping IO port bar(1) 00:05:07.999 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:08.258 EAL: Ignore mapping IO port bar(1) 00:05:08.258 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:08.258 EAL: Ignore mapping IO port bar(1) 00:05:08.518 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:08.518 EAL: Ignore mapping IO port bar(1) 00:05:08.778 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:08.778 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:08.778 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:08.778 Starting DPDK initialization... 00:05:08.778 Starting SPDK post initialization... 00:05:08.778 SPDK NVMe probe 00:05:08.778 Attaching to 0000:65:00.0 00:05:08.778 Attached to 0000:65:00.0 00:05:08.778 Cleaning up... 00:05:10.691 00:05:10.691 real 0m5.726s 00:05:10.691 user 0m0.191s 00:05:10.691 sys 0m0.077s 00:05:10.691 20:18:26 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.691 20:18:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.691 ************************************ 00:05:10.691 END TEST env_dpdk_post_init 00:05:10.691 ************************************ 00:05:10.691 20:18:26 env -- env/env.sh@26 -- # uname 00:05:10.691 20:18:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:10.691 20:18:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.691 20:18:26 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.691 20:18:26 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.691 20:18:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.691 ************************************ 00:05:10.691 START TEST env_mem_callbacks 00:05:10.691 ************************************ 00:05:10.691 20:18:26 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.691 EAL: Detected CPU lcores: 128 00:05:10.691 EAL: Detected NUMA nodes: 2 00:05:10.691 EAL: Detected shared linkage of DPDK 00:05:10.691 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.691 EAL: Selected IOVA mode 'VA' 00:05:10.691 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.692 EAL: VFIO support initialized 00:05:10.692 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.692 00:05:10.692 00:05:10.692 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.692 http://cunit.sourceforge.net/ 00:05:10.692 00:05:10.692 00:05:10.692 Suite: memory 00:05:10.692 Test: test ... 
00:05:10.692 register 0x200000200000 2097152 00:05:10.692 malloc 3145728 00:05:10.692 register 0x200000400000 4194304 00:05:10.692 buf 0x200000500000 len 3145728 PASSED 00:05:10.692 malloc 64 00:05:10.692 buf 0x2000004fff40 len 64 PASSED 00:05:10.692 malloc 4194304 00:05:10.692 register 0x200000800000 6291456 00:05:10.692 buf 0x200000a00000 len 4194304 PASSED 00:05:10.692 free 0x200000500000 3145728 00:05:10.692 free 0x2000004fff40 64 00:05:10.692 unregister 0x200000400000 4194304 PASSED 00:05:10.692 free 0x200000a00000 4194304 00:05:10.692 unregister 0x200000800000 6291456 PASSED 00:05:10.692 malloc 8388608 00:05:10.692 register 0x200000400000 10485760 00:05:10.692 buf 0x200000600000 len 8388608 PASSED 00:05:10.692 free 0x200000600000 8388608 00:05:10.692 unregister 0x200000400000 10485760 PASSED 00:05:10.692 passed 00:05:10.692 00:05:10.692 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.692 suites 1 1 n/a 0 0 00:05:10.692 tests 1 1 1 0 0 00:05:10.692 asserts 15 15 15 0 n/a 00:05:10.692 00:05:10.692 Elapsed time = 0.004 seconds 00:05:10.692 00:05:10.692 real 0m0.062s 00:05:10.692 user 0m0.025s 00:05:10.692 sys 0m0.037s 00:05:10.692 20:18:26 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.692 20:18:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:10.692 ************************************ 00:05:10.692 END TEST env_mem_callbacks 00:05:10.692 ************************************ 00:05:10.692 00:05:10.692 real 0m7.347s 00:05:10.692 user 0m1.046s 00:05:10.692 sys 0m0.820s 00:05:10.692 20:18:26 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.692 20:18:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.692 ************************************ 00:05:10.692 END TEST env 00:05:10.692 ************************************ 00:05:10.692 20:18:26 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:10.692 20:18:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.692 20:18:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.692 20:18:26 -- common/autotest_common.sh@10 -- # set +x 00:05:10.692 ************************************ 00:05:10.692 START TEST rpc 00:05:10.692 ************************************ 00:05:10.692 20:18:26 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:10.951 * Looking for test storage... 00:05:10.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:10.951 20:18:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2813796 00:05:10.951 20:18:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.951 20:18:26 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:10.951 20:18:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2813796 00:05:10.951 20:18:26 rpc -- common/autotest_common.sh@827 -- # '[' -z 2813796 ']' 00:05:10.951 20:18:26 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.951 20:18:26 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:10.951 20:18:26 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
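At this point the rpc suite has launched spdk_tgt and is waiting for its UNIX-domain RPC socket to come up. A rough stand-in for that wait, assuming scripts/rpc.py from the SPDK tree is available (the real logic lives in waitforlisten in common/autotest_common.sh and is more thorough):

    # Hypothetical polling loop: retry until the RPC socket answers rpc_get_methods.
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        if ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done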
00:05:10.951 20:18:26 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:10.951 20:18:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.951 [2024-05-13 20:18:26.721056] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:10.952 [2024-05-13 20:18:26.721112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2813796 ] 00:05:10.952 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.952 [2024-05-13 20:18:26.789566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.952 [2024-05-13 20:18:26.855577] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:10.952 [2024-05-13 20:18:26.855617] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2813796' to capture a snapshot of events at runtime. 00:05:10.952 [2024-05-13 20:18:26.855624] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:10.952 [2024-05-13 20:18:26.855631] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:10.952 [2024-05-13 20:18:26.855636] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2813796 for offline analysis/debug. 00:05:10.952 [2024-05-13 20:18:26.855663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.893 20:18:27 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:11.893 20:18:27 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:11.893 20:18:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.893 20:18:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.893 20:18:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:11.893 20:18:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:11.893 20:18:27 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.893 20:18:27 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.893 20:18:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.893 ************************************ 00:05:11.893 START TEST rpc_integrity 00:05:11.893 ************************************ 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:11.893 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.893 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:11.893 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:11.893 20:18:27 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:11.893 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.893 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:11.893 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.893 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:11.893 { 00:05:11.893 "name": "Malloc0", 00:05:11.893 "aliases": [ 00:05:11.893 "08448ccc-5ae6-4bae-845f-754c6ca85321" 00:05:11.893 ], 00:05:11.893 "product_name": "Malloc disk", 00:05:11.893 "block_size": 512, 00:05:11.893 "num_blocks": 16384, 00:05:11.893 "uuid": "08448ccc-5ae6-4bae-845f-754c6ca85321", 00:05:11.893 "assigned_rate_limits": { 00:05:11.893 "rw_ios_per_sec": 0, 00:05:11.893 "rw_mbytes_per_sec": 0, 00:05:11.893 "r_mbytes_per_sec": 0, 00:05:11.893 "w_mbytes_per_sec": 0 00:05:11.893 }, 00:05:11.893 "claimed": false, 00:05:11.893 "zoned": false, 00:05:11.893 "supported_io_types": { 00:05:11.893 "read": true, 00:05:11.893 "write": true, 00:05:11.893 "unmap": true, 00:05:11.893 "write_zeroes": true, 00:05:11.893 "flush": true, 00:05:11.893 "reset": true, 00:05:11.893 "compare": false, 00:05:11.893 "compare_and_write": false, 00:05:11.893 "abort": true, 00:05:11.893 "nvme_admin": false, 00:05:11.893 "nvme_io": false 00:05:11.893 }, 00:05:11.893 "memory_domains": [ 00:05:11.893 { 00:05:11.893 "dma_device_id": "system", 00:05:11.893 "dma_device_type": 1 00:05:11.893 }, 00:05:11.893 { 00:05:11.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.893 "dma_device_type": 2 00:05:11.893 } 00:05:11.893 ], 00:05:11.893 "driver_specific": {} 00:05:11.893 } 00:05:11.893 ]' 00:05:11.893 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:11.893 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:11.893 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.893 [2024-05-13 20:18:27.665394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:11.893 [2024-05-13 20:18:27.665427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.893 [2024-05-13 20:18:27.665439] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x189a2f0 00:05:11.893 [2024-05-13 20:18:27.665446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:11.893 [2024-05-13 20:18:27.666920] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.893 [2024-05-13 20:18:27.666940] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:11.893 Passthru0 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.893 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.893 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.893 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:11.893 { 00:05:11.893 "name": "Malloc0", 00:05:11.893 "aliases": [ 00:05:11.893 "08448ccc-5ae6-4bae-845f-754c6ca85321" 00:05:11.893 ], 00:05:11.893 "product_name": "Malloc disk", 00:05:11.893 "block_size": 512, 00:05:11.893 "num_blocks": 16384, 00:05:11.893 "uuid": "08448ccc-5ae6-4bae-845f-754c6ca85321", 00:05:11.893 "assigned_rate_limits": { 00:05:11.893 "rw_ios_per_sec": 0, 00:05:11.893 "rw_mbytes_per_sec": 0, 00:05:11.893 "r_mbytes_per_sec": 0, 00:05:11.893 "w_mbytes_per_sec": 0 00:05:11.893 }, 00:05:11.893 "claimed": true, 00:05:11.893 "claim_type": "exclusive_write", 00:05:11.893 "zoned": false, 00:05:11.893 "supported_io_types": { 00:05:11.893 "read": true, 00:05:11.893 "write": true, 00:05:11.893 "unmap": true, 00:05:11.893 "write_zeroes": true, 00:05:11.893 "flush": true, 00:05:11.893 "reset": true, 00:05:11.893 "compare": false, 00:05:11.893 "compare_and_write": false, 00:05:11.893 "abort": true, 00:05:11.893 "nvme_admin": false, 00:05:11.893 "nvme_io": false 00:05:11.893 }, 00:05:11.893 "memory_domains": [ 00:05:11.893 { 00:05:11.893 "dma_device_id": "system", 00:05:11.893 "dma_device_type": 1 00:05:11.893 }, 00:05:11.894 { 00:05:11.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.894 "dma_device_type": 2 00:05:11.894 } 00:05:11.894 ], 00:05:11.894 "driver_specific": {} 00:05:11.894 }, 00:05:11.894 { 00:05:11.894 "name": "Passthru0", 00:05:11.894 "aliases": [ 00:05:11.894 "c6fc9394-9c8e-5800-8771-b64f5a9a166c" 00:05:11.894 ], 00:05:11.894 "product_name": "passthru", 00:05:11.894 "block_size": 512, 00:05:11.894 "num_blocks": 16384, 00:05:11.894 "uuid": "c6fc9394-9c8e-5800-8771-b64f5a9a166c", 00:05:11.894 "assigned_rate_limits": { 00:05:11.894 "rw_ios_per_sec": 0, 00:05:11.894 "rw_mbytes_per_sec": 0, 00:05:11.894 "r_mbytes_per_sec": 0, 00:05:11.894 "w_mbytes_per_sec": 0 00:05:11.894 }, 00:05:11.894 "claimed": false, 00:05:11.894 "zoned": false, 00:05:11.894 "supported_io_types": { 00:05:11.894 "read": true, 00:05:11.894 "write": true, 00:05:11.894 "unmap": true, 00:05:11.894 "write_zeroes": true, 00:05:11.894 "flush": true, 00:05:11.894 "reset": true, 00:05:11.894 "compare": false, 00:05:11.894 "compare_and_write": false, 00:05:11.894 "abort": true, 00:05:11.894 "nvme_admin": false, 00:05:11.894 "nvme_io": false 00:05:11.894 }, 00:05:11.894 "memory_domains": [ 00:05:11.894 { 00:05:11.894 "dma_device_id": "system", 00:05:11.894 "dma_device_type": 1 00:05:11.894 }, 00:05:11.894 { 00:05:11.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.894 "dma_device_type": 2 00:05:11.894 } 00:05:11.894 ], 00:05:11.894 "driver_specific": { 00:05:11.894 "passthru": { 00:05:11.894 "name": "Passthru0", 00:05:11.894 "base_bdev_name": "Malloc0" 00:05:11.894 } 00:05:11.894 } 00:05:11.894 } 00:05:11.894 ]' 00:05:11.894 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:11.894 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:11.894 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:11.894 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.894 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.894 
20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.894 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:11.894 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.894 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.894 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.894 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:11.894 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.894 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.894 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.894 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:11.894 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:11.894 20:18:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:11.894 00:05:11.894 real 0m0.290s 00:05:11.894 user 0m0.188s 00:05:11.894 sys 0m0.037s 00:05:11.894 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.894 20:18:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.894 ************************************ 00:05:11.894 END TEST rpc_integrity 00:05:11.894 ************************************ 00:05:12.155 20:18:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:12.155 20:18:27 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.155 20:18:27 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.155 20:18:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.155 ************************************ 00:05:12.155 START TEST rpc_plugins 00:05:12.155 ************************************ 00:05:12.155 20:18:27 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:12.155 20:18:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:12.155 20:18:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.155 20:18:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.155 20:18:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.155 20:18:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:12.155 20:18:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:12.155 20:18:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.155 20:18:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.155 20:18:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.155 20:18:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:12.155 { 00:05:12.155 "name": "Malloc1", 00:05:12.155 "aliases": [ 00:05:12.155 "6b4d7ab6-fb48-43e4-a17e-b1f1a4d66f30" 00:05:12.155 ], 00:05:12.155 "product_name": "Malloc disk", 00:05:12.155 "block_size": 4096, 00:05:12.155 "num_blocks": 256, 00:05:12.155 "uuid": "6b4d7ab6-fb48-43e4-a17e-b1f1a4d66f30", 00:05:12.155 "assigned_rate_limits": { 00:05:12.155 "rw_ios_per_sec": 0, 00:05:12.155 "rw_mbytes_per_sec": 0, 00:05:12.155 "r_mbytes_per_sec": 0, 00:05:12.155 "w_mbytes_per_sec": 0 00:05:12.155 }, 00:05:12.155 "claimed": false, 00:05:12.155 "zoned": false, 00:05:12.155 "supported_io_types": { 00:05:12.155 "read": true, 00:05:12.155 "write": true, 00:05:12.155 "unmap": true, 00:05:12.155 "write_zeroes": true, 00:05:12.155 
"flush": true, 00:05:12.155 "reset": true, 00:05:12.155 "compare": false, 00:05:12.155 "compare_and_write": false, 00:05:12.155 "abort": true, 00:05:12.155 "nvme_admin": false, 00:05:12.155 "nvme_io": false 00:05:12.155 }, 00:05:12.155 "memory_domains": [ 00:05:12.155 { 00:05:12.155 "dma_device_id": "system", 00:05:12.155 "dma_device_type": 1 00:05:12.155 }, 00:05:12.155 { 00:05:12.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.155 "dma_device_type": 2 00:05:12.155 } 00:05:12.155 ], 00:05:12.155 "driver_specific": {} 00:05:12.155 } 00:05:12.155 ]' 00:05:12.155 20:18:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:12.155 20:18:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:12.155 20:18:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:12.155 20:18:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.155 20:18:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.155 20:18:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.155 20:18:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:12.155 20:18:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.155 20:18:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.155 20:18:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.155 20:18:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:12.155 20:18:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:12.155 20:18:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:12.155 00:05:12.155 real 0m0.154s 00:05:12.155 user 0m0.096s 00:05:12.155 sys 0m0.020s 00:05:12.155 20:18:28 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.155 20:18:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.155 ************************************ 00:05:12.155 END TEST rpc_plugins 00:05:12.155 ************************************ 00:05:12.155 20:18:28 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:12.155 20:18:28 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.155 20:18:28 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.155 20:18:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.414 ************************************ 00:05:12.414 START TEST rpc_trace_cmd_test 00:05:12.414 ************************************ 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:12.414 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2813796", 00:05:12.414 "tpoint_group_mask": "0x8", 00:05:12.414 "iscsi_conn": { 00:05:12.414 "mask": "0x2", 00:05:12.414 "tpoint_mask": "0x0" 00:05:12.414 }, 00:05:12.414 "scsi": { 00:05:12.414 "mask": "0x4", 00:05:12.414 "tpoint_mask": "0x0" 00:05:12.414 }, 00:05:12.414 "bdev": { 00:05:12.414 "mask": "0x8", 00:05:12.414 "tpoint_mask": 
"0xffffffffffffffff" 00:05:12.414 }, 00:05:12.414 "nvmf_rdma": { 00:05:12.414 "mask": "0x10", 00:05:12.414 "tpoint_mask": "0x0" 00:05:12.414 }, 00:05:12.414 "nvmf_tcp": { 00:05:12.414 "mask": "0x20", 00:05:12.414 "tpoint_mask": "0x0" 00:05:12.414 }, 00:05:12.414 "ftl": { 00:05:12.414 "mask": "0x40", 00:05:12.414 "tpoint_mask": "0x0" 00:05:12.414 }, 00:05:12.414 "blobfs": { 00:05:12.414 "mask": "0x80", 00:05:12.414 "tpoint_mask": "0x0" 00:05:12.414 }, 00:05:12.414 "dsa": { 00:05:12.414 "mask": "0x200", 00:05:12.414 "tpoint_mask": "0x0" 00:05:12.414 }, 00:05:12.414 "thread": { 00:05:12.414 "mask": "0x400", 00:05:12.414 "tpoint_mask": "0x0" 00:05:12.414 }, 00:05:12.414 "nvme_pcie": { 00:05:12.414 "mask": "0x800", 00:05:12.414 "tpoint_mask": "0x0" 00:05:12.414 }, 00:05:12.414 "iaa": { 00:05:12.414 "mask": "0x1000", 00:05:12.414 "tpoint_mask": "0x0" 00:05:12.414 }, 00:05:12.414 "nvme_tcp": { 00:05:12.414 "mask": "0x2000", 00:05:12.414 "tpoint_mask": "0x0" 00:05:12.414 }, 00:05:12.414 "bdev_nvme": { 00:05:12.414 "mask": "0x4000", 00:05:12.414 "tpoint_mask": "0x0" 00:05:12.414 }, 00:05:12.414 "sock": { 00:05:12.414 "mask": "0x8000", 00:05:12.414 "tpoint_mask": "0x0" 00:05:12.414 } 00:05:12.414 }' 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:12.414 20:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:12.676 20:18:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:12.676 00:05:12.676 real 0m0.247s 00:05:12.676 user 0m0.209s 00:05:12.676 sys 0m0.029s 00:05:12.676 20:18:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.676 20:18:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:12.676 ************************************ 00:05:12.676 END TEST rpc_trace_cmd_test 00:05:12.676 ************************************ 00:05:12.676 20:18:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:12.676 20:18:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:12.676 20:18:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:12.676 20:18:28 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.676 20:18:28 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.676 20:18:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.676 ************************************ 00:05:12.676 START TEST rpc_daemon_integrity 00:05:12.676 ************************************ 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.676 { 00:05:12.676 "name": "Malloc2", 00:05:12.676 "aliases": [ 00:05:12.676 "93f8d8cc-b18f-4c55-83f4-86f3f3672ec5" 00:05:12.676 ], 00:05:12.676 "product_name": "Malloc disk", 00:05:12.676 "block_size": 512, 00:05:12.676 "num_blocks": 16384, 00:05:12.676 "uuid": "93f8d8cc-b18f-4c55-83f4-86f3f3672ec5", 00:05:12.676 "assigned_rate_limits": { 00:05:12.676 "rw_ios_per_sec": 0, 00:05:12.676 "rw_mbytes_per_sec": 0, 00:05:12.676 "r_mbytes_per_sec": 0, 00:05:12.676 "w_mbytes_per_sec": 0 00:05:12.676 }, 00:05:12.676 "claimed": false, 00:05:12.676 "zoned": false, 00:05:12.676 "supported_io_types": { 00:05:12.676 "read": true, 00:05:12.676 "write": true, 00:05:12.676 "unmap": true, 00:05:12.676 "write_zeroes": true, 00:05:12.676 "flush": true, 00:05:12.676 "reset": true, 00:05:12.676 "compare": false, 00:05:12.676 "compare_and_write": false, 00:05:12.676 "abort": true, 00:05:12.676 "nvme_admin": false, 00:05:12.676 "nvme_io": false 00:05:12.676 }, 00:05:12.676 "memory_domains": [ 00:05:12.676 { 00:05:12.676 "dma_device_id": "system", 00:05:12.676 "dma_device_type": 1 00:05:12.676 }, 00:05:12.676 { 00:05:12.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.676 "dma_device_type": 2 00:05:12.676 } 00:05:12.676 ], 00:05:12.676 "driver_specific": {} 00:05:12.676 } 00:05:12.676 ]' 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.676 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.676 [2024-05-13 20:18:28.595895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:12.676 [2024-05-13 20:18:28.595922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.676 [2024-05-13 20:18:28.595936] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1899fd0 00:05:12.677 [2024-05-13 20:18:28.595943] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.677 [2024-05-13 20:18:28.597140] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.677 [2024-05-13 20:18:28.597159] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:12.677 Passthru0 00:05:12.677 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.677 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:12.677 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.677 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.937 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.937 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:12.937 { 00:05:12.937 "name": "Malloc2", 00:05:12.937 "aliases": [ 00:05:12.937 "93f8d8cc-b18f-4c55-83f4-86f3f3672ec5" 00:05:12.937 ], 00:05:12.937 "product_name": "Malloc disk", 00:05:12.937 "block_size": 512, 00:05:12.937 "num_blocks": 16384, 00:05:12.937 "uuid": "93f8d8cc-b18f-4c55-83f4-86f3f3672ec5", 00:05:12.937 "assigned_rate_limits": { 00:05:12.937 "rw_ios_per_sec": 0, 00:05:12.937 "rw_mbytes_per_sec": 0, 00:05:12.937 "r_mbytes_per_sec": 0, 00:05:12.937 "w_mbytes_per_sec": 0 00:05:12.937 }, 00:05:12.937 "claimed": true, 00:05:12.937 "claim_type": "exclusive_write", 00:05:12.937 "zoned": false, 00:05:12.937 "supported_io_types": { 00:05:12.937 "read": true, 00:05:12.938 "write": true, 00:05:12.938 "unmap": true, 00:05:12.938 "write_zeroes": true, 00:05:12.938 "flush": true, 00:05:12.938 "reset": true, 00:05:12.938 "compare": false, 00:05:12.938 "compare_and_write": false, 00:05:12.938 "abort": true, 00:05:12.938 "nvme_admin": false, 00:05:12.938 "nvme_io": false 00:05:12.938 }, 00:05:12.938 "memory_domains": [ 00:05:12.938 { 00:05:12.938 "dma_device_id": "system", 00:05:12.938 "dma_device_type": 1 00:05:12.938 }, 00:05:12.938 { 00:05:12.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.938 "dma_device_type": 2 00:05:12.938 } 00:05:12.938 ], 00:05:12.938 "driver_specific": {} 00:05:12.938 }, 00:05:12.938 { 00:05:12.938 "name": "Passthru0", 00:05:12.938 "aliases": [ 00:05:12.938 "dfacedc8-2c17-5d1e-b29d-5c7daaba28ba" 00:05:12.938 ], 00:05:12.938 "product_name": "passthru", 00:05:12.938 "block_size": 512, 00:05:12.938 "num_blocks": 16384, 00:05:12.938 "uuid": "dfacedc8-2c17-5d1e-b29d-5c7daaba28ba", 00:05:12.938 "assigned_rate_limits": { 00:05:12.938 "rw_ios_per_sec": 0, 00:05:12.938 "rw_mbytes_per_sec": 0, 00:05:12.938 "r_mbytes_per_sec": 0, 00:05:12.938 "w_mbytes_per_sec": 0 00:05:12.938 }, 00:05:12.938 "claimed": false, 00:05:12.938 "zoned": false, 00:05:12.938 "supported_io_types": { 00:05:12.938 "read": true, 00:05:12.938 "write": true, 00:05:12.938 "unmap": true, 00:05:12.938 "write_zeroes": true, 00:05:12.938 "flush": true, 00:05:12.938 "reset": true, 00:05:12.938 "compare": false, 00:05:12.938 "compare_and_write": false, 00:05:12.938 "abort": true, 00:05:12.938 "nvme_admin": false, 00:05:12.938 "nvme_io": false 00:05:12.938 }, 00:05:12.938 "memory_domains": [ 00:05:12.938 { 00:05:12.938 "dma_device_id": "system", 00:05:12.938 "dma_device_type": 1 00:05:12.938 }, 00:05:12.938 { 00:05:12.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.938 "dma_device_type": 2 00:05:12.938 } 00:05:12.938 ], 00:05:12.938 "driver_specific": { 00:05:12.938 "passthru": { 00:05:12.938 "name": "Passthru0", 00:05:12.938 "base_bdev_name": "Malloc2" 00:05:12.938 } 00:05:12.938 } 00:05:12.938 } 00:05:12.938 ]' 00:05:12.938 20:18:28 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:12.938 00:05:12.938 real 0m0.279s 00:05:12.938 user 0m0.191s 00:05:12.938 sys 0m0.028s 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.938 20:18:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.938 ************************************ 00:05:12.938 END TEST rpc_daemon_integrity 00:05:12.938 ************************************ 00:05:12.938 20:18:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:12.938 20:18:28 rpc -- rpc/rpc.sh@84 -- # killprocess 2813796 00:05:12.938 20:18:28 rpc -- common/autotest_common.sh@946 -- # '[' -z 2813796 ']' 00:05:12.938 20:18:28 rpc -- common/autotest_common.sh@950 -- # kill -0 2813796 00:05:12.938 20:18:28 rpc -- common/autotest_common.sh@951 -- # uname 00:05:12.938 20:18:28 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:12.938 20:18:28 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2813796 00:05:12.938 20:18:28 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:12.938 20:18:28 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:12.938 20:18:28 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2813796' 00:05:12.938 killing process with pid 2813796 00:05:12.938 20:18:28 rpc -- common/autotest_common.sh@965 -- # kill 2813796 00:05:12.938 20:18:28 rpc -- common/autotest_common.sh@970 -- # wait 2813796 00:05:13.199 00:05:13.199 real 0m2.479s 00:05:13.199 user 0m3.267s 00:05:13.199 sys 0m0.682s 00:05:13.199 20:18:29 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.199 20:18:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.199 ************************************ 00:05:13.199 END TEST rpc 00:05:13.199 ************************************ 00:05:13.199 20:18:29 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:13.199 20:18:29 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.199 20:18:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.199 20:18:29 -- common/autotest_common.sh@10 -- # set +x 00:05:13.199 ************************************ 00:05:13.199 START TEST skip_rpc 00:05:13.199 ************************************ 00:05:13.199 20:18:29 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:13.460 * Looking for test storage... 00:05:13.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:13.460 20:18:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:13.460 20:18:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:13.460 20:18:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:13.460 20:18:29 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.460 20:18:29 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.460 20:18:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.460 ************************************ 00:05:13.460 START TEST skip_rpc 00:05:13.460 ************************************ 00:05:13.460 20:18:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:13.460 20:18:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2814628 00:05:13.460 20:18:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.460 20:18:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:13.460 20:18:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:13.460 [2024-05-13 20:18:29.305125] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
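Here the skip_rpc case starts the target with --no-rpc-server, so no RPC listener is created at all; the assertion made in the next step is simply that an RPC call must fail. A hand-rolled equivalent, assuming an SPDK build under ./build and the default socket path, would look roughly like this (illustrative only, not the harness's exact code):

# Start the target with RPC disabled and prove that RPC is unreachable.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5                                    # the test uses a fixed 5 s settle time
if scripts/rpc.py spdk_get_version; then   # must fail: /var/tmp/spdk.sock was never created
    echo "unexpected: RPC answered with --no-rpc-server" >&2
fi
kill -9 $tgt_pid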
00:05:13.460 [2024-05-13 20:18:29.305169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814628 ] 00:05:13.460 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.460 [2024-05-13 20:18:29.370973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.720 [2024-05-13 20:18:29.436815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2814628 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 2814628 ']' 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 2814628 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2814628 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2814628' 00:05:19.085 killing process with pid 2814628 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 2814628 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 2814628 00:05:19.085 00:05:19.085 real 0m5.265s 00:05:19.085 user 0m5.069s 00:05:19.085 sys 0m0.225s 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.085 20:18:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.085 ************************************ 00:05:19.085 END TEST skip_rpc 
00:05:19.085 ************************************ 00:05:19.085 20:18:34 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:19.085 20:18:34 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.085 20:18:34 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.085 20:18:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.085 ************************************ 00:05:19.085 START TEST skip_rpc_with_json 00:05:19.085 ************************************ 00:05:19.085 20:18:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:19.085 20:18:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:19.085 20:18:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2815671 00:05:19.085 20:18:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.085 20:18:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2815671 00:05:19.085 20:18:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 2815671 ']' 00:05:19.085 20:18:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.085 20:18:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:19.085 20:18:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.085 20:18:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:19.085 20:18:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.085 20:18:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.085 [2024-05-13 20:18:34.649807] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
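For skip_rpc_with_json the target is started with its RPC server enabled, and the harness blocks in waitforlisten until the socket answers before issuing any commands. A rough, illustrative equivalent of that wait (helper name, 10 s budget, and rpc.py location are assumptions, not the harness's code):

# Sketch of a waitforlisten-style loop: poll the RPC socket until the target answers.
wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || return 1                          # target died early
        scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1                                                            # never came up
}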
00:05:19.085 [2024-05-13 20:18:34.649858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2815671 ] 00:05:19.085 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.086 [2024-05-13 20:18:34.716419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.086 [2024-05-13 20:18:34.785341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.655 [2024-05-13 20:18:35.442937] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:19.655 request: 00:05:19.655 { 00:05:19.655 "trtype": "tcp", 00:05:19.655 "method": "nvmf_get_transports", 00:05:19.655 "req_id": 1 00:05:19.655 } 00:05:19.655 Got JSON-RPC error response 00:05:19.655 response: 00:05:19.655 { 00:05:19.655 "code": -19, 00:05:19.655 "message": "No such device" 00:05:19.655 } 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.655 [2024-05-13 20:18:35.451039] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.655 20:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:19.655 { 00:05:19.655 "subsystems": [ 00:05:19.655 { 00:05:19.655 "subsystem": "keyring", 00:05:19.655 "config": [] 00:05:19.655 }, 00:05:19.655 { 00:05:19.655 "subsystem": "iobuf", 00:05:19.655 "config": [ 00:05:19.655 { 00:05:19.655 "method": "iobuf_set_options", 00:05:19.655 "params": { 00:05:19.655 "small_pool_count": 8192, 00:05:19.655 "large_pool_count": 1024, 00:05:19.655 "small_bufsize": 8192, 00:05:19.655 "large_bufsize": 135168 00:05:19.655 } 00:05:19.655 } 00:05:19.655 ] 00:05:19.655 }, 00:05:19.655 { 00:05:19.655 "subsystem": "sock", 00:05:19.655 "config": [ 00:05:19.655 { 00:05:19.655 "method": "sock_impl_set_options", 00:05:19.655 "params": { 00:05:19.655 "impl_name": "posix", 00:05:19.655 "recv_buf_size": 2097152, 00:05:19.655 "send_buf_size": 2097152, 00:05:19.655 "enable_recv_pipe": true, 00:05:19.655 "enable_quickack": false, 00:05:19.655 
"enable_placement_id": 0, 00:05:19.655 "enable_zerocopy_send_server": true, 00:05:19.655 "enable_zerocopy_send_client": false, 00:05:19.655 "zerocopy_threshold": 0, 00:05:19.655 "tls_version": 0, 00:05:19.655 "enable_ktls": false 00:05:19.655 } 00:05:19.655 }, 00:05:19.656 { 00:05:19.656 "method": "sock_impl_set_options", 00:05:19.656 "params": { 00:05:19.656 "impl_name": "ssl", 00:05:19.656 "recv_buf_size": 4096, 00:05:19.656 "send_buf_size": 4096, 00:05:19.656 "enable_recv_pipe": true, 00:05:19.656 "enable_quickack": false, 00:05:19.656 "enable_placement_id": 0, 00:05:19.656 "enable_zerocopy_send_server": true, 00:05:19.656 "enable_zerocopy_send_client": false, 00:05:19.656 "zerocopy_threshold": 0, 00:05:19.656 "tls_version": 0, 00:05:19.656 "enable_ktls": false 00:05:19.656 } 00:05:19.656 } 00:05:19.656 ] 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "subsystem": "vmd", 00:05:19.656 "config": [] 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "subsystem": "accel", 00:05:19.656 "config": [ 00:05:19.656 { 00:05:19.656 "method": "accel_set_options", 00:05:19.656 "params": { 00:05:19.656 "small_cache_size": 128, 00:05:19.656 "large_cache_size": 16, 00:05:19.656 "task_count": 2048, 00:05:19.656 "sequence_count": 2048, 00:05:19.656 "buf_count": 2048 00:05:19.656 } 00:05:19.656 } 00:05:19.656 ] 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "subsystem": "bdev", 00:05:19.656 "config": [ 00:05:19.656 { 00:05:19.656 "method": "bdev_set_options", 00:05:19.656 "params": { 00:05:19.656 "bdev_io_pool_size": 65535, 00:05:19.656 "bdev_io_cache_size": 256, 00:05:19.656 "bdev_auto_examine": true, 00:05:19.656 "iobuf_small_cache_size": 128, 00:05:19.656 "iobuf_large_cache_size": 16 00:05:19.656 } 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "method": "bdev_raid_set_options", 00:05:19.656 "params": { 00:05:19.656 "process_window_size_kb": 1024 00:05:19.656 } 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "method": "bdev_iscsi_set_options", 00:05:19.656 "params": { 00:05:19.656 "timeout_sec": 30 00:05:19.656 } 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "method": "bdev_nvme_set_options", 00:05:19.656 "params": { 00:05:19.656 "action_on_timeout": "none", 00:05:19.656 "timeout_us": 0, 00:05:19.656 "timeout_admin_us": 0, 00:05:19.656 "keep_alive_timeout_ms": 10000, 00:05:19.656 "arbitration_burst": 0, 00:05:19.656 "low_priority_weight": 0, 00:05:19.656 "medium_priority_weight": 0, 00:05:19.656 "high_priority_weight": 0, 00:05:19.656 "nvme_adminq_poll_period_us": 10000, 00:05:19.656 "nvme_ioq_poll_period_us": 0, 00:05:19.656 "io_queue_requests": 0, 00:05:19.656 "delay_cmd_submit": true, 00:05:19.656 "transport_retry_count": 4, 00:05:19.656 "bdev_retry_count": 3, 00:05:19.656 "transport_ack_timeout": 0, 00:05:19.656 "ctrlr_loss_timeout_sec": 0, 00:05:19.656 "reconnect_delay_sec": 0, 00:05:19.656 "fast_io_fail_timeout_sec": 0, 00:05:19.656 "disable_auto_failback": false, 00:05:19.656 "generate_uuids": false, 00:05:19.656 "transport_tos": 0, 00:05:19.656 "nvme_error_stat": false, 00:05:19.656 "rdma_srq_size": 0, 00:05:19.656 "io_path_stat": false, 00:05:19.656 "allow_accel_sequence": false, 00:05:19.656 "rdma_max_cq_size": 0, 00:05:19.656 "rdma_cm_event_timeout_ms": 0, 00:05:19.656 "dhchap_digests": [ 00:05:19.656 "sha256", 00:05:19.656 "sha384", 00:05:19.656 "sha512" 00:05:19.656 ], 00:05:19.656 "dhchap_dhgroups": [ 00:05:19.656 "null", 00:05:19.656 "ffdhe2048", 00:05:19.656 "ffdhe3072", 00:05:19.656 "ffdhe4096", 00:05:19.656 "ffdhe6144", 00:05:19.656 "ffdhe8192" 00:05:19.656 ] 00:05:19.656 } 00:05:19.656 }, 00:05:19.656 { 
00:05:19.656 "method": "bdev_nvme_set_hotplug", 00:05:19.656 "params": { 00:05:19.656 "period_us": 100000, 00:05:19.656 "enable": false 00:05:19.656 } 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "method": "bdev_wait_for_examine" 00:05:19.656 } 00:05:19.656 ] 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "subsystem": "scsi", 00:05:19.656 "config": null 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "subsystem": "scheduler", 00:05:19.656 "config": [ 00:05:19.656 { 00:05:19.656 "method": "framework_set_scheduler", 00:05:19.656 "params": { 00:05:19.656 "name": "static" 00:05:19.656 } 00:05:19.656 } 00:05:19.656 ] 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "subsystem": "vhost_scsi", 00:05:19.656 "config": [] 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "subsystem": "vhost_blk", 00:05:19.656 "config": [] 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "subsystem": "ublk", 00:05:19.656 "config": [] 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "subsystem": "nbd", 00:05:19.656 "config": [] 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "subsystem": "nvmf", 00:05:19.656 "config": [ 00:05:19.656 { 00:05:19.656 "method": "nvmf_set_config", 00:05:19.656 "params": { 00:05:19.656 "discovery_filter": "match_any", 00:05:19.656 "admin_cmd_passthru": { 00:05:19.656 "identify_ctrlr": false 00:05:19.656 } 00:05:19.656 } 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "method": "nvmf_set_max_subsystems", 00:05:19.656 "params": { 00:05:19.656 "max_subsystems": 1024 00:05:19.656 } 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "method": "nvmf_set_crdt", 00:05:19.656 "params": { 00:05:19.656 "crdt1": 0, 00:05:19.656 "crdt2": 0, 00:05:19.656 "crdt3": 0 00:05:19.656 } 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "method": "nvmf_create_transport", 00:05:19.656 "params": { 00:05:19.656 "trtype": "TCP", 00:05:19.656 "max_queue_depth": 128, 00:05:19.656 "max_io_qpairs_per_ctrlr": 127, 00:05:19.656 "in_capsule_data_size": 4096, 00:05:19.656 "max_io_size": 131072, 00:05:19.656 "io_unit_size": 131072, 00:05:19.656 "max_aq_depth": 128, 00:05:19.656 "num_shared_buffers": 511, 00:05:19.656 "buf_cache_size": 4294967295, 00:05:19.656 "dif_insert_or_strip": false, 00:05:19.656 "zcopy": false, 00:05:19.656 "c2h_success": true, 00:05:19.656 "sock_priority": 0, 00:05:19.656 "abort_timeout_sec": 1, 00:05:19.656 "ack_timeout": 0, 00:05:19.656 "data_wr_pool_size": 0 00:05:19.656 } 00:05:19.656 } 00:05:19.656 ] 00:05:19.656 }, 00:05:19.656 { 00:05:19.656 "subsystem": "iscsi", 00:05:19.656 "config": [ 00:05:19.656 { 00:05:19.656 "method": "iscsi_set_options", 00:05:19.656 "params": { 00:05:19.656 "node_base": "iqn.2016-06.io.spdk", 00:05:19.656 "max_sessions": 128, 00:05:19.656 "max_connections_per_session": 2, 00:05:19.656 "max_queue_depth": 64, 00:05:19.656 "default_time2wait": 2, 00:05:19.656 "default_time2retain": 20, 00:05:19.656 "first_burst_length": 8192, 00:05:19.656 "immediate_data": true, 00:05:19.656 "allow_duplicated_isid": false, 00:05:19.656 "error_recovery_level": 0, 00:05:19.656 "nop_timeout": 60, 00:05:19.656 "nop_in_interval": 30, 00:05:19.656 "disable_chap": false, 00:05:19.656 "require_chap": false, 00:05:19.656 "mutual_chap": false, 00:05:19.656 "chap_group": 0, 00:05:19.656 "max_large_datain_per_connection": 64, 00:05:19.656 "max_r2t_per_connection": 4, 00:05:19.656 "pdu_pool_size": 36864, 00:05:19.656 "immediate_data_pool_size": 16384, 00:05:19.656 "data_out_pool_size": 2048 00:05:19.656 } 00:05:19.656 } 00:05:19.656 ] 00:05:19.656 } 00:05:19.656 ] 00:05:19.656 } 00:05:19.656 20:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 
-- # trap - SIGINT SIGTERM EXIT 00:05:19.656 20:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2815671 00:05:19.656 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2815671 ']' 00:05:19.656 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2815671 00:05:19.656 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:19.656 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:19.656 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2815671 00:05:19.917 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:19.917 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:19.917 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2815671' 00:05:19.917 killing process with pid 2815671 00:05:19.917 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2815671 00:05:19.917 20:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2815671 00:05:19.917 20:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2816013 00:05:19.917 20:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:19.917 20:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:25.200 20:18:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2816013 00:05:25.200 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2816013 ']' 00:05:25.200 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2816013 00:05:25.200 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:25.200 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:25.200 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2816013 00:05:25.200 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:25.200 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:25.200 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2816013' 00:05:25.200 killing process with pid 2816013 00:05:25.200 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2816013 00:05:25.200 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2816013 00:05:25.200 20:18:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:25.200 20:18:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:25.200 00:05:25.200 real 0m6.513s 00:05:25.200 user 0m6.407s 00:05:25.200 sys 0m0.509s 00:05:25.200 20:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.200 20:18:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.200 
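That completes the configuration round trip this case is really about: build state over RPC (nvmf_create_transport -t tcp), dump it with save_config, restart a fresh target with --json pointing at the dump, and grep the new target's log for 'TCP Transport Init' to prove the transport was recreated purely from the saved file. Condensed to its essentials (illustrative; paths and the $tgt_pid of the first target are assumed from the earlier sketches):

# Save the live configuration, then replay it into a brand-new target instance.
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py save_config > config.json
kill -9 $tgt_pid                              # stop the first, RPC-configured target
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
sleep 5
grep -q 'TCP Transport Init' log.txt          # transport came back from the JSON alone
kill -9 $!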
************************************ 00:05:25.200 END TEST skip_rpc_with_json 00:05:25.200 ************************************ 00:05:25.461 20:18:41 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:25.461 20:18:41 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.461 20:18:41 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.461 20:18:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.461 ************************************ 00:05:25.461 START TEST skip_rpc_with_delay 00:05:25.461 ************************************ 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:25.461 [2024-05-13 20:18:41.240665] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
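skip_rpc_with_delay is a pure argument-validation check: --wait-for-rpc only makes sense when an RPC server will be started, so combining it with --no-rpc-server has to be rejected before the app comes up, exactly as the error above shows. As a standalone check (illustrative sketch, same assumed build path as before):

# The target must refuse this flag combination and exit non-zero.
if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: spdk_tgt started despite --wait-for-rpc with no RPC server" >&2
    exit 1
fi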
00:05:25.461 [2024-05-13 20:18:41.240743] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.461 00:05:25.461 real 0m0.067s 00:05:25.461 user 0m0.043s 00:05:25.461 sys 0m0.024s 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.461 20:18:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:25.461 ************************************ 00:05:25.461 END TEST skip_rpc_with_delay 00:05:25.461 ************************************ 00:05:25.461 20:18:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:25.461 20:18:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:25.461 20:18:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:25.461 20:18:41 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.461 20:18:41 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.461 20:18:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.461 ************************************ 00:05:25.461 START TEST exit_on_failed_rpc_init 00:05:25.461 ************************************ 00:05:25.461 20:18:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:25.461 20:18:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2817080 00:05:25.461 20:18:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2817080 00:05:25.461 20:18:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 2817080 ']' 00:05:25.461 20:18:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.461 20:18:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:25.461 20:18:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.461 20:18:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:25.461 20:18:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.461 20:18:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.461 [2024-05-13 20:18:41.384260] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
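exit_on_failed_rpc_init begins by bringing up a first target on core mask 0x1 and waiting until it owns the default RPC socket; that socket is what the second instance will collide with in the next step. The setup half, sketched outside the harness (reusing the wait_for_rpc helper sketched earlier; illustrative only):

# First instance: claims /var/tmp/spdk.sock, the default RPC socket.
./build/bin/spdk_tgt -m 0x1 &
first_pid=$!
wait_for_rpc "$first_pid" || exit 1           # poll until the socket answers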
00:05:25.461 [2024-05-13 20:18:41.384328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817080 ] 00:05:25.722 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.722 [2024-05-13 20:18:41.454052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.722 [2024-05-13 20:18:41.528224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.292 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:26.292 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:26.293 20:18:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.293 20:18:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.293 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:26.293 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.293 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.293 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.293 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.293 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.293 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.293 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.293 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.293 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:26.293 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.293 [2024-05-13 20:18:42.183434] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:26.293 [2024-05-13 20:18:42.183485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817271 ] 00:05:26.293 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.553 [2024-05-13 20:18:42.265445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.553 [2024-05-13 20:18:42.329551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.553 [2024-05-13 20:18:42.329613] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:26.553 [2024-05-13 20:18:42.329623] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:26.553 [2024-05-13 20:18:42.329630] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2817080 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 2817080 ']' 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 2817080 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2817080 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2817080' 00:05:26.553 killing process with pid 2817080 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 2817080 00:05:26.553 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 2817080 00:05:26.813 00:05:26.813 real 0m1.311s 00:05:26.813 user 0m1.519s 00:05:26.813 sys 0m0.366s 00:05:26.813 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.813 20:18:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.813 ************************************ 00:05:26.813 END TEST exit_on_failed_rpc_init 00:05:26.813 ************************************ 00:05:26.813 20:18:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:26.813 00:05:26.813 real 0m13.551s 00:05:26.813 user 0m13.168s 00:05:26.813 sys 0m1.394s 00:05:26.813 20:18:42 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.813 20:18:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.813 ************************************ 00:05:26.813 END TEST skip_rpc 00:05:26.813 ************************************ 00:05:26.814 20:18:42 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:26.814 20:18:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.814 20:18:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.814 20:18:42 -- 
common/autotest_common.sh@10 -- # set +x 00:05:26.814 ************************************ 00:05:26.814 START TEST rpc_client 00:05:26.814 ************************************ 00:05:26.814 20:18:42 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:27.075 * Looking for test storage... 00:05:27.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:27.075 20:18:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:27.075 OK 00:05:27.075 20:18:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:27.075 00:05:27.075 real 0m0.104s 00:05:27.075 user 0m0.047s 00:05:27.075 sys 0m0.066s 00:05:27.075 20:18:42 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.075 20:18:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:27.075 ************************************ 00:05:27.075 END TEST rpc_client 00:05:27.075 ************************************ 00:05:27.075 20:18:42 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:27.075 20:18:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.075 20:18:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.075 20:18:42 -- common/autotest_common.sh@10 -- # set +x 00:05:27.075 ************************************ 00:05:27.075 START TEST json_config 00:05:27.075 ************************************ 00:05:27.075 20:18:42 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:27.075 20:18:42 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.075 20:18:42 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.075 20:18:43 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.075 20:18:43 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.075 20:18:43 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.075 20:18:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.075 20:18:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.075 20:18:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.075 20:18:43 json_config -- paths/export.sh@5 -- # export PATH 00:05:27.075 20:18:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.075 20:18:43 json_config -- nvmf/common.sh@47 -- # : 0 00:05:27.075 20:18:43 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:27.075 20:18:43 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:27.075 20:18:43 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.075 20:18:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.075 20:18:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.075 20:18:43 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:27.075 20:18:43 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:27.075 20:18:43 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:27.075 20:18:43 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:27.075 20:18:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:27.075 20:18:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:27.075 20:18:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:27.075 20:18:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:27.075 20:18:43 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:27.075 20:18:43 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:27.075 20:18:43 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:27.075 20:18:43 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:27.076 20:18:43 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:27.076 20:18:43 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:27.076 20:18:43 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:27.076 20:18:43 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:27.076 20:18:43 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:27.076 20:18:43 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:27.076 20:18:43 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:27.076 INFO: JSON configuration test init 00:05:27.076 20:18:43 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:27.076 20:18:43 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:27.076 20:18:43 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:27.076 20:18:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.076 20:18:43 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:27.076 20:18:43 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:27.076 20:18:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.076 20:18:43 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:27.076 20:18:43 json_config -- json_config/common.sh@9 -- # local app=target 00:05:27.076 20:18:43 json_config -- json_config/common.sh@10 -- # shift 00:05:27.076 20:18:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:27.076 20:18:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:27.076 20:18:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:27.076 20:18:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.076 20:18:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.076 20:18:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2817528 00:05:27.076 20:18:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:27.076 Waiting for target to run... 
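The waitforlisten step that follows blocks until the spdk_tgt just launched with --wait-for-rpc answers on /var/tmp/spdk_tgt.sock. A minimal standalone sketch of that start-and-wait pattern, reusing the binary, socket and flags seen in this run (the polling loop, sleep interval and variable names are illustrative, not the harness's own code):

  # Start the target with the RPC server up but subsystem init deferred (--wait-for-rpc).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
      -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  tgt_pid=$!

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Poll until the Unix-domain RPC socket accepts requests.
  until $rpc -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # With --wait-for-rpc, subsystem initialization only starts once this RPC is sent.
  $rpc -s /var/tmp/spdk_tgt.sock framework_start_init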
00:05:27.076 20:18:43 json_config -- json_config/common.sh@25 -- # waitforlisten 2817528 /var/tmp/spdk_tgt.sock 00:05:27.076 20:18:43 json_config -- common/autotest_common.sh@827 -- # '[' -z 2817528 ']' 00:05:27.076 20:18:43 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:27.336 20:18:43 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:27.336 20:18:43 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:27.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:27.336 20:18:43 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:27.336 20:18:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.336 20:18:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:27.336 [2024-05-13 20:18:43.067670] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:27.336 [2024-05-13 20:18:43.067724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817528 ] 00:05:27.336 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.597 [2024-05-13 20:18:43.300746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.597 [2024-05-13 20:18:43.352502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.169 20:18:43 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:28.169 20:18:43 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:28.169 20:18:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:28.169 00:05:28.169 20:18:43 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:28.169 20:18:43 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:28.169 20:18:43 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:28.169 20:18:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.169 20:18:43 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:28.169 20:18:43 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:28.169 20:18:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.169 20:18:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.169 20:18:43 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:28.169 20:18:43 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:28.169 20:18:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:28.741 20:18:44 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:28.741 20:18:44 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:28.742 20:18:44 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:28.742 20:18:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.742 20:18:44 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:28.742 20:18:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:28.742 20:18:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.742 20:18:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:28.742 20:18:44 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:28.742 20:18:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:28.742 20:18:44 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.742 20:18:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.002 MallocForNvmf0 00:05:29.002 20:18:44 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.002 20:18:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.002 MallocForNvmf1 00:05:29.002 20:18:44 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.002 20:18:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.262 [2024-05-13 20:18:45.075303] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.263 20:18:45 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.263 20:18:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.523 20:18:45 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.523 20:18:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.523 20:18:45 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.523 20:18:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.783 20:18:45 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:29.783 20:18:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:30.043 [2024-05-13 20:18:45.745079] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:30.043 [2024-05-13 20:18:45.745656] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:30.043 20:18:45 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:30.043 20:18:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.043 20:18:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.043 20:18:45 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:30.043 20:18:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.043 20:18:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.043 20:18:45 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:30.043 20:18:45 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.043 20:18:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.303 MallocBdevForConfigChangeCheck 00:05:30.303 20:18:46 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:30.303 20:18:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.303 20:18:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.303 20:18:46 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:30.303 20:18:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.563 20:18:46 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:05:30.563 INFO: shutting down applications... 00:05:30.563 20:18:46 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:30.563 20:18:46 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:30.563 20:18:46 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:30.563 20:18:46 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:30.823 Calling clear_iscsi_subsystem 00:05:30.823 Calling clear_nvmf_subsystem 00:05:30.823 Calling clear_nbd_subsystem 00:05:30.823 Calling clear_ublk_subsystem 00:05:30.823 Calling clear_vhost_blk_subsystem 00:05:30.823 Calling clear_vhost_scsi_subsystem 00:05:30.823 Calling clear_bdev_subsystem 00:05:30.823 20:18:46 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:30.823 20:18:46 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:30.823 20:18:46 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:30.823 20:18:46 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.823 20:18:46 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:30.823 20:18:46 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:31.394 20:18:47 json_config -- json_config/json_config.sh@345 -- # break 00:05:31.394 20:18:47 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:31.394 20:18:47 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:31.394 20:18:47 json_config -- json_config/common.sh@31 -- # local app=target 00:05:31.394 20:18:47 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:31.394 20:18:47 json_config -- json_config/common.sh@35 -- # [[ -n 2817528 ]] 00:05:31.394 20:18:47 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2817528 00:05:31.394 [2024-05-13 20:18:47.050407] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:31.394 20:18:47 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:31.394 20:18:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.394 20:18:47 json_config -- json_config/common.sh@41 -- # kill -0 2817528 00:05:31.394 20:18:47 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:31.658 20:18:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:31.658 20:18:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.658 20:18:47 json_config -- json_config/common.sh@41 -- # kill -0 2817528 00:05:31.658 20:18:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:31.658 20:18:47 json_config -- json_config/common.sh@43 -- # break 00:05:31.658 20:18:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:31.658 20:18:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:31.658 SPDK target shutdown done 00:05:31.658 20:18:47 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:31.658 INFO: relaunching applications... 00:05:31.658 20:18:47 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:31.658 20:18:47 json_config -- json_config/common.sh@9 -- # local app=target 00:05:31.658 20:18:47 json_config -- json_config/common.sh@10 -- # shift 00:05:31.658 20:18:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.658 20:18:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.658 20:18:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.658 20:18:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.658 20:18:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.658 20:18:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2818659 00:05:31.658 20:18:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:31.658 Waiting for target to run... 00:05:31.658 20:18:47 json_config -- json_config/common.sh@25 -- # waitforlisten 2818659 /var/tmp/spdk_tgt.sock 00:05:31.658 20:18:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:31.658 20:18:47 json_config -- common/autotest_common.sh@827 -- # '[' -z 2818659 ']' 00:05:31.658 20:18:47 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.658 20:18:47 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:31.658 20:18:47 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.658 20:18:47 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:31.658 20:18:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.919 [2024-05-13 20:18:47.609182] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:05:31.919 [2024-05-13 20:18:47.609236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2818659 ] 00:05:31.919 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.178 [2024-05-13 20:18:47.906382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.178 [2024-05-13 20:18:47.958492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.748 [2024-05-13 20:18:48.449295] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.748 [2024-05-13 20:18:48.481276] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:32.748 [2024-05-13 20:18:48.481835] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:32.748 20:18:48 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:32.748 20:18:48 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:32.748 20:18:48 json_config -- json_config/common.sh@26 -- # echo '' 00:05:32.748 00:05:32.748 20:18:48 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:32.748 20:18:48 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:32.748 INFO: Checking if target configuration is the same... 00:05:32.748 20:18:48 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.748 20:18:48 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:32.748 20:18:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.748 + '[' 2 -ne 2 ']' 00:05:32.748 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:32.748 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:32.748 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:32.748 +++ basename /dev/fd/62 00:05:32.748 ++ mktemp /tmp/62.XXX 00:05:32.748 + tmp_file_1=/tmp/62.Zkv 00:05:32.748 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.748 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:32.748 + tmp_file_2=/tmp/spdk_tgt_config.json.wuU 00:05:32.748 + ret=0 00:05:32.748 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.008 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.008 + diff -u /tmp/62.Zkv /tmp/spdk_tgt_config.json.wuU 00:05:33.008 + echo 'INFO: JSON config files are the same' 00:05:33.008 INFO: JSON config files are the same 00:05:33.008 + rm /tmp/62.Zkv /tmp/spdk_tgt_config.json.wuU 00:05:33.008 + exit 0 00:05:33.008 20:18:48 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:33.008 20:18:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:33.008 INFO: changing configuration and checking if this can be detected... 
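The "JSON config files are the same" verdict above reduces to dumping the live configuration, normalizing key order, and diffing against the reference file. A rough sketch of that flow with the scripts used in this run (the /tmp file names are illustrative, and config_filter.py is assumed to filter stdin to stdout as the pipeline here suggests):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sort_cfg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
  # Dump the running target's configuration.
  $rpc -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
  # Normalize both sides so key ordering cannot cause spurious differences.
  $sort_cfg -method sort < /tmp/live_config.json > /tmp/live_sorted.json
  $sort_cfg -method sort < spdk_tgt_config.json   > /tmp/ref_sorted.json
  # Identical output means the relaunched target matches the saved configuration.
  diff -u /tmp/ref_sorted.json /tmp/live_sorted.json && echo 'configs match'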
00:05:33.008 20:18:48 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:33.008 20:18:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:33.270 20:18:49 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.270 20:18:49 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:33.270 20:18:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.270 + '[' 2 -ne 2 ']' 00:05:33.270 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:33.270 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:33.270 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:33.270 +++ basename /dev/fd/62 00:05:33.270 ++ mktemp /tmp/62.XXX 00:05:33.270 + tmp_file_1=/tmp/62.Lok 00:05:33.270 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.270 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:33.270 + tmp_file_2=/tmp/spdk_tgt_config.json.sBr 00:05:33.270 + ret=0 00:05:33.270 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.531 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.532 + diff -u /tmp/62.Lok /tmp/spdk_tgt_config.json.sBr 00:05:33.532 + ret=1 00:05:33.532 + echo '=== Start of file: /tmp/62.Lok ===' 00:05:33.532 + cat /tmp/62.Lok 00:05:33.532 + echo '=== End of file: /tmp/62.Lok ===' 00:05:33.532 + echo '' 00:05:33.532 + echo '=== Start of file: /tmp/spdk_tgt_config.json.sBr ===' 00:05:33.532 + cat /tmp/spdk_tgt_config.json.sBr 00:05:33.532 + echo '=== End of file: /tmp/spdk_tgt_config.json.sBr ===' 00:05:33.532 + echo '' 00:05:33.532 + rm /tmp/62.Lok /tmp/spdk_tgt_config.json.sBr 00:05:33.532 + exit 1 00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:33.532 INFO: configuration change detected. 
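The configuration being compared and perturbed here was assembled earlier in the test entirely through rpc.py. A condensed sketch of that build-up, using the same RPCs, names and addresses logged above (the $rpc wrapper variable is illustrative):

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  # Backing malloc bdevs for the NVMe-oF namespaces.
  $rpc bdev_malloc_create 8 512  --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport, subsystem, namespaces, and a listener on 127.0.0.1:4420.
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420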
00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:33.532 20:18:49 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:33.532 20:18:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@317 -- # [[ -n 2818659 ]] 00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:33.532 20:18:49 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:33.532 20:18:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:33.532 20:18:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.532 20:18:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.532 20:18:49 json_config -- json_config/json_config.sh@323 -- # killprocess 2818659 00:05:33.532 20:18:49 json_config -- common/autotest_common.sh@946 -- # '[' -z 2818659 ']' 00:05:33.532 20:18:49 json_config -- common/autotest_common.sh@950 -- # kill -0 2818659 00:05:33.532 20:18:49 json_config -- common/autotest_common.sh@951 -- # uname 00:05:33.532 20:18:49 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:33.532 20:18:49 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2818659 00:05:33.792 20:18:49 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:33.792 20:18:49 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:33.792 20:18:49 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2818659' 00:05:33.792 killing process with pid 2818659 00:05:33.792 20:18:49 json_config -- common/autotest_common.sh@965 -- # kill 2818659 00:05:33.792 [2024-05-13 20:18:49.507872] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:33.792 20:18:49 json_config -- common/autotest_common.sh@970 -- # wait 2818659 00:05:34.054 20:18:49 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.054 20:18:49 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:34.054 20:18:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.054 20:18:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.054 20:18:49 
json_config -- json_config/json_config.sh@328 -- # return 0 00:05:34.054 20:18:49 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:34.054 INFO: Success 00:05:34.054 00:05:34.054 real 0m6.899s 00:05:34.054 user 0m8.488s 00:05:34.054 sys 0m1.612s 00:05:34.054 20:18:49 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.054 20:18:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.054 ************************************ 00:05:34.054 END TEST json_config 00:05:34.054 ************************************ 00:05:34.054 20:18:49 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:34.054 20:18:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.054 20:18:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.054 20:18:49 -- common/autotest_common.sh@10 -- # set +x 00:05:34.054 ************************************ 00:05:34.054 START TEST json_config_extra_key 00:05:34.054 ************************************ 00:05:34.054 20:18:49 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:34.054 20:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:34.054 20:18:49 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.054 20:18:49 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.054 20:18:49 
json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.054 20:18:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.054 20:18:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.054 20:18:49 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.054 20:18:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:34.054 20:18:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:34.054 20:18:49 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:34.054 20:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:34.054 20:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:34.054 20:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:34.054 20:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:34.054 20:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:34.054 20:18:49 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:34.054 20:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:34.054 20:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:34.054 20:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:34.054 20:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:34.054 20:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:34.054 INFO: launching applications... 00:05:34.054 20:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:34.054 20:18:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:34.054 20:18:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:34.054 20:18:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.054 20:18:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.054 20:18:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.054 20:18:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.054 20:18:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.054 20:18:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2819114 00:05:34.054 20:18:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:34.054 Waiting for target to run... 00:05:34.054 20:18:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2819114 /var/tmp/spdk_tgt.sock 00:05:34.055 20:18:49 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 2819114 ']' 00:05:34.055 20:18:49 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.055 20:18:49 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:34.055 20:18:49 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.055 20:18:49 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:34.055 20:18:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:34.055 20:18:49 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:34.317 [2024-05-13 20:18:50.032607] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:05:34.317 [2024-05-13 20:18:50.032664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819114 ] 00:05:34.317 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.577 [2024-05-13 20:18:50.432098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.577 [2024-05-13 20:18:50.494123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.149 20:18:50 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:35.149 20:18:50 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:35.149 20:18:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:35.149 00:05:35.149 20:18:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:35.149 INFO: shutting down applications... 00:05:35.149 20:18:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:35.149 20:18:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:35.149 20:18:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.149 20:18:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2819114 ]] 00:05:35.149 20:18:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2819114 00:05:35.149 20:18:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.149 20:18:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.149 20:18:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2819114 00:05:35.149 20:18:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.409 20:18:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.409 20:18:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.409 20:18:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2819114 00:05:35.409 20:18:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:35.409 20:18:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:35.409 20:18:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:35.409 20:18:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:35.409 SPDK target shutdown done 00:05:35.409 20:18:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:35.409 Success 00:05:35.409 00:05:35.409 real 0m1.428s 00:05:35.409 user 0m0.991s 00:05:35.409 sys 0m0.484s 00:05:35.409 20:18:51 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.409 20:18:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.409 ************************************ 00:05:35.409 END TEST json_config_extra_key 00:05:35.409 ************************************ 00:05:35.670 20:18:51 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.670 20:18:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.670 20:18:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.670 20:18:51 -- common/autotest_common.sh@10 -- # set +x 00:05:35.670 ************************************ 
00:05:35.670 START TEST alias_rpc 00:05:35.670 ************************************ 00:05:35.670 20:18:51 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.670 * Looking for test storage... 00:05:35.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:35.670 20:18:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.670 20:18:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2819503 00:05:35.670 20:18:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2819503 00:05:35.670 20:18:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.670 20:18:51 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 2819503 ']' 00:05:35.670 20:18:51 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.670 20:18:51 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:35.670 20:18:51 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.670 20:18:51 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:35.670 20:18:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.670 [2024-05-13 20:18:51.573018] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:35.670 [2024-05-13 20:18:51.573096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819503 ] 00:05:35.670 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.931 [2024-05-13 20:18:51.644164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.931 [2024-05-13 20:18:51.717813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.501 20:18:52 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:36.501 20:18:52 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:36.501 20:18:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:36.760 20:18:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2819503 00:05:36.760 20:18:52 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 2819503 ']' 00:05:36.760 20:18:52 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 2819503 00:05:36.761 20:18:52 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:36.761 20:18:52 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:36.761 20:18:52 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2819503 00:05:36.761 20:18:52 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:36.761 20:18:52 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:36.761 20:18:52 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2819503' 00:05:36.761 killing process with pid 2819503 00:05:36.761 20:18:52 alias_rpc -- common/autotest_common.sh@965 -- # kill 2819503 00:05:36.761 20:18:52 alias_rpc -- common/autotest_common.sh@970 -- # wait 2819503 
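The alias_rpc test above exercises the target through rpc.py load_config against the default /var/tmp/spdk.sock. A small round-trip sketch, assuming save_config writes JSON to stdout and load_config reads it back from stdin (the temp file name is illustrative):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Capture the current target configuration...
  $rpc save_config > /tmp/alias_test_config.json
  # ...and feed it back; a target in the same state should accept it without errors.
  $rpc load_config < /tmp/alias_test_config.json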
00:05:37.020 00:05:37.020 real 0m1.307s 00:05:37.020 user 0m1.414s 00:05:37.020 sys 0m0.350s 00:05:37.020 20:18:52 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.020 20:18:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.020 ************************************ 00:05:37.020 END TEST alias_rpc 00:05:37.020 ************************************ 00:05:37.020 20:18:52 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:37.020 20:18:52 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.020 20:18:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.020 20:18:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.020 20:18:52 -- common/autotest_common.sh@10 -- # set +x 00:05:37.020 ************************************ 00:05:37.021 START TEST spdkcli_tcp 00:05:37.021 ************************************ 00:05:37.021 20:18:52 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.021 * Looking for test storage... 00:05:37.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:37.021 20:18:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:37.021 20:18:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:37.021 20:18:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:37.021 20:18:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:37.021 20:18:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:37.021 20:18:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:37.021 20:18:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:37.021 20:18:52 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:37.021 20:18:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.021 20:18:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2819886 00:05:37.021 20:18:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2819886 00:05:37.021 20:18:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:37.021 20:18:52 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 2819886 ']' 00:05:37.021 20:18:52 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.021 20:18:52 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:37.021 20:18:52 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.021 20:18:52 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:37.021 20:18:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.021 [2024-05-13 20:18:52.957468] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:05:37.021 [2024-05-13 20:18:52.957516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819886 ] 00:05:37.281 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.281 [2024-05-13 20:18:53.025232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.281 [2024-05-13 20:18:53.090428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.281 [2024-05-13 20:18:53.090515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.851 20:18:53 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:37.851 20:18:53 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:37.851 20:18:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:37.851 20:18:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2820032 00:05:37.851 20:18:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:38.112 [ 00:05:38.112 "bdev_malloc_delete", 00:05:38.112 "bdev_malloc_create", 00:05:38.112 "bdev_null_resize", 00:05:38.112 "bdev_null_delete", 00:05:38.112 "bdev_null_create", 00:05:38.112 "bdev_nvme_cuse_unregister", 00:05:38.112 "bdev_nvme_cuse_register", 00:05:38.112 "bdev_opal_new_user", 00:05:38.112 "bdev_opal_set_lock_state", 00:05:38.112 "bdev_opal_delete", 00:05:38.112 "bdev_opal_get_info", 00:05:38.112 "bdev_opal_create", 00:05:38.112 "bdev_nvme_opal_revert", 00:05:38.112 "bdev_nvme_opal_init", 00:05:38.112 "bdev_nvme_send_cmd", 00:05:38.112 "bdev_nvme_get_path_iostat", 00:05:38.112 "bdev_nvme_get_mdns_discovery_info", 00:05:38.112 "bdev_nvme_stop_mdns_discovery", 00:05:38.112 "bdev_nvme_start_mdns_discovery", 00:05:38.112 "bdev_nvme_set_multipath_policy", 00:05:38.112 "bdev_nvme_set_preferred_path", 00:05:38.112 "bdev_nvme_get_io_paths", 00:05:38.112 "bdev_nvme_remove_error_injection", 00:05:38.112 "bdev_nvme_add_error_injection", 00:05:38.112 "bdev_nvme_get_discovery_info", 00:05:38.112 "bdev_nvme_stop_discovery", 00:05:38.112 "bdev_nvme_start_discovery", 00:05:38.112 "bdev_nvme_get_controller_health_info", 00:05:38.112 "bdev_nvme_disable_controller", 00:05:38.112 "bdev_nvme_enable_controller", 00:05:38.112 "bdev_nvme_reset_controller", 00:05:38.112 "bdev_nvme_get_transport_statistics", 00:05:38.112 "bdev_nvme_apply_firmware", 00:05:38.112 "bdev_nvme_detach_controller", 00:05:38.112 "bdev_nvme_get_controllers", 00:05:38.112 "bdev_nvme_attach_controller", 00:05:38.112 "bdev_nvme_set_hotplug", 00:05:38.112 "bdev_nvme_set_options", 00:05:38.112 "bdev_passthru_delete", 00:05:38.112 "bdev_passthru_create", 00:05:38.112 "bdev_lvol_check_shallow_copy", 00:05:38.112 "bdev_lvol_start_shallow_copy", 00:05:38.112 "bdev_lvol_grow_lvstore", 00:05:38.112 "bdev_lvol_get_lvols", 00:05:38.112 "bdev_lvol_get_lvstores", 00:05:38.112 "bdev_lvol_delete", 00:05:38.112 "bdev_lvol_set_read_only", 00:05:38.112 "bdev_lvol_resize", 00:05:38.112 "bdev_lvol_decouple_parent", 00:05:38.112 "bdev_lvol_inflate", 00:05:38.112 "bdev_lvol_rename", 00:05:38.112 "bdev_lvol_clone_bdev", 00:05:38.112 "bdev_lvol_clone", 00:05:38.112 "bdev_lvol_snapshot", 00:05:38.112 "bdev_lvol_create", 00:05:38.112 "bdev_lvol_delete_lvstore", 00:05:38.112 "bdev_lvol_rename_lvstore", 00:05:38.112 "bdev_lvol_create_lvstore", 00:05:38.112 "bdev_raid_set_options", 
00:05:38.112 "bdev_raid_remove_base_bdev", 00:05:38.112 "bdev_raid_add_base_bdev", 00:05:38.112 "bdev_raid_delete", 00:05:38.112 "bdev_raid_create", 00:05:38.112 "bdev_raid_get_bdevs", 00:05:38.112 "bdev_error_inject_error", 00:05:38.112 "bdev_error_delete", 00:05:38.112 "bdev_error_create", 00:05:38.112 "bdev_split_delete", 00:05:38.112 "bdev_split_create", 00:05:38.112 "bdev_delay_delete", 00:05:38.112 "bdev_delay_create", 00:05:38.112 "bdev_delay_update_latency", 00:05:38.112 "bdev_zone_block_delete", 00:05:38.112 "bdev_zone_block_create", 00:05:38.112 "blobfs_create", 00:05:38.112 "blobfs_detect", 00:05:38.112 "blobfs_set_cache_size", 00:05:38.112 "bdev_aio_delete", 00:05:38.112 "bdev_aio_rescan", 00:05:38.112 "bdev_aio_create", 00:05:38.112 "bdev_ftl_set_property", 00:05:38.112 "bdev_ftl_get_properties", 00:05:38.112 "bdev_ftl_get_stats", 00:05:38.112 "bdev_ftl_unmap", 00:05:38.112 "bdev_ftl_unload", 00:05:38.112 "bdev_ftl_delete", 00:05:38.112 "bdev_ftl_load", 00:05:38.112 "bdev_ftl_create", 00:05:38.112 "bdev_virtio_attach_controller", 00:05:38.112 "bdev_virtio_scsi_get_devices", 00:05:38.112 "bdev_virtio_detach_controller", 00:05:38.112 "bdev_virtio_blk_set_hotplug", 00:05:38.112 "bdev_iscsi_delete", 00:05:38.112 "bdev_iscsi_create", 00:05:38.112 "bdev_iscsi_set_options", 00:05:38.112 "accel_error_inject_error", 00:05:38.112 "ioat_scan_accel_module", 00:05:38.112 "dsa_scan_accel_module", 00:05:38.112 "iaa_scan_accel_module", 00:05:38.112 "keyring_file_remove_key", 00:05:38.112 "keyring_file_add_key", 00:05:38.112 "iscsi_get_histogram", 00:05:38.112 "iscsi_enable_histogram", 00:05:38.112 "iscsi_set_options", 00:05:38.112 "iscsi_get_auth_groups", 00:05:38.112 "iscsi_auth_group_remove_secret", 00:05:38.112 "iscsi_auth_group_add_secret", 00:05:38.112 "iscsi_delete_auth_group", 00:05:38.112 "iscsi_create_auth_group", 00:05:38.112 "iscsi_set_discovery_auth", 00:05:38.112 "iscsi_get_options", 00:05:38.112 "iscsi_target_node_request_logout", 00:05:38.112 "iscsi_target_node_set_redirect", 00:05:38.112 "iscsi_target_node_set_auth", 00:05:38.112 "iscsi_target_node_add_lun", 00:05:38.112 "iscsi_get_stats", 00:05:38.112 "iscsi_get_connections", 00:05:38.112 "iscsi_portal_group_set_auth", 00:05:38.112 "iscsi_start_portal_group", 00:05:38.112 "iscsi_delete_portal_group", 00:05:38.112 "iscsi_create_portal_group", 00:05:38.112 "iscsi_get_portal_groups", 00:05:38.112 "iscsi_delete_target_node", 00:05:38.112 "iscsi_target_node_remove_pg_ig_maps", 00:05:38.112 "iscsi_target_node_add_pg_ig_maps", 00:05:38.112 "iscsi_create_target_node", 00:05:38.112 "iscsi_get_target_nodes", 00:05:38.112 "iscsi_delete_initiator_group", 00:05:38.112 "iscsi_initiator_group_remove_initiators", 00:05:38.112 "iscsi_initiator_group_add_initiators", 00:05:38.112 "iscsi_create_initiator_group", 00:05:38.112 "iscsi_get_initiator_groups", 00:05:38.112 "nvmf_set_crdt", 00:05:38.112 "nvmf_set_config", 00:05:38.112 "nvmf_set_max_subsystems", 00:05:38.112 "nvmf_subsystem_get_listeners", 00:05:38.112 "nvmf_subsystem_get_qpairs", 00:05:38.112 "nvmf_subsystem_get_controllers", 00:05:38.112 "nvmf_get_stats", 00:05:38.112 "nvmf_get_transports", 00:05:38.112 "nvmf_create_transport", 00:05:38.112 "nvmf_get_targets", 00:05:38.112 "nvmf_delete_target", 00:05:38.112 "nvmf_create_target", 00:05:38.112 "nvmf_subsystem_allow_any_host", 00:05:38.112 "nvmf_subsystem_remove_host", 00:05:38.112 "nvmf_subsystem_add_host", 00:05:38.112 "nvmf_ns_remove_host", 00:05:38.112 "nvmf_ns_add_host", 00:05:38.112 "nvmf_subsystem_remove_ns", 00:05:38.112 
"nvmf_subsystem_add_ns", 00:05:38.112 "nvmf_subsystem_listener_set_ana_state", 00:05:38.112 "nvmf_discovery_get_referrals", 00:05:38.112 "nvmf_discovery_remove_referral", 00:05:38.112 "nvmf_discovery_add_referral", 00:05:38.112 "nvmf_subsystem_remove_listener", 00:05:38.112 "nvmf_subsystem_add_listener", 00:05:38.112 "nvmf_delete_subsystem", 00:05:38.112 "nvmf_create_subsystem", 00:05:38.112 "nvmf_get_subsystems", 00:05:38.112 "env_dpdk_get_mem_stats", 00:05:38.113 "nbd_get_disks", 00:05:38.113 "nbd_stop_disk", 00:05:38.113 "nbd_start_disk", 00:05:38.113 "ublk_recover_disk", 00:05:38.113 "ublk_get_disks", 00:05:38.113 "ublk_stop_disk", 00:05:38.113 "ublk_start_disk", 00:05:38.113 "ublk_destroy_target", 00:05:38.113 "ublk_create_target", 00:05:38.113 "virtio_blk_create_transport", 00:05:38.113 "virtio_blk_get_transports", 00:05:38.113 "vhost_controller_set_coalescing", 00:05:38.113 "vhost_get_controllers", 00:05:38.113 "vhost_delete_controller", 00:05:38.113 "vhost_create_blk_controller", 00:05:38.113 "vhost_scsi_controller_remove_target", 00:05:38.113 "vhost_scsi_controller_add_target", 00:05:38.113 "vhost_start_scsi_controller", 00:05:38.113 "vhost_create_scsi_controller", 00:05:38.113 "thread_set_cpumask", 00:05:38.113 "framework_get_scheduler", 00:05:38.113 "framework_set_scheduler", 00:05:38.113 "framework_get_reactors", 00:05:38.113 "thread_get_io_channels", 00:05:38.113 "thread_get_pollers", 00:05:38.113 "thread_get_stats", 00:05:38.113 "framework_monitor_context_switch", 00:05:38.113 "spdk_kill_instance", 00:05:38.113 "log_enable_timestamps", 00:05:38.113 "log_get_flags", 00:05:38.113 "log_clear_flag", 00:05:38.113 "log_set_flag", 00:05:38.113 "log_get_level", 00:05:38.113 "log_set_level", 00:05:38.113 "log_get_print_level", 00:05:38.113 "log_set_print_level", 00:05:38.113 "framework_enable_cpumask_locks", 00:05:38.113 "framework_disable_cpumask_locks", 00:05:38.113 "framework_wait_init", 00:05:38.113 "framework_start_init", 00:05:38.113 "scsi_get_devices", 00:05:38.113 "bdev_get_histogram", 00:05:38.113 "bdev_enable_histogram", 00:05:38.113 "bdev_set_qos_limit", 00:05:38.113 "bdev_set_qd_sampling_period", 00:05:38.113 "bdev_get_bdevs", 00:05:38.113 "bdev_reset_iostat", 00:05:38.113 "bdev_get_iostat", 00:05:38.113 "bdev_examine", 00:05:38.113 "bdev_wait_for_examine", 00:05:38.113 "bdev_set_options", 00:05:38.113 "notify_get_notifications", 00:05:38.113 "notify_get_types", 00:05:38.113 "accel_get_stats", 00:05:38.113 "accel_set_options", 00:05:38.113 "accel_set_driver", 00:05:38.113 "accel_crypto_key_destroy", 00:05:38.113 "accel_crypto_keys_get", 00:05:38.113 "accel_crypto_key_create", 00:05:38.113 "accel_assign_opc", 00:05:38.113 "accel_get_module_info", 00:05:38.113 "accel_get_opc_assignments", 00:05:38.113 "vmd_rescan", 00:05:38.113 "vmd_remove_device", 00:05:38.113 "vmd_enable", 00:05:38.113 "sock_get_default_impl", 00:05:38.113 "sock_set_default_impl", 00:05:38.113 "sock_impl_set_options", 00:05:38.113 "sock_impl_get_options", 00:05:38.113 "iobuf_get_stats", 00:05:38.113 "iobuf_set_options", 00:05:38.113 "framework_get_pci_devices", 00:05:38.113 "framework_get_config", 00:05:38.113 "framework_get_subsystems", 00:05:38.113 "trace_get_info", 00:05:38.113 "trace_get_tpoint_group_mask", 00:05:38.113 "trace_disable_tpoint_group", 00:05:38.113 "trace_enable_tpoint_group", 00:05:38.113 "trace_clear_tpoint_mask", 00:05:38.113 "trace_set_tpoint_mask", 00:05:38.113 "keyring_get_keys", 00:05:38.113 "spdk_get_version", 00:05:38.113 "rpc_get_methods" 00:05:38.113 ] 00:05:38.113 20:18:53 
spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:38.113 20:18:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.113 20:18:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.113 20:18:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:38.113 20:18:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2819886 00:05:38.113 20:18:53 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 2819886 ']' 00:05:38.113 20:18:53 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 2819886 00:05:38.113 20:18:53 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:38.113 20:18:53 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:38.113 20:18:53 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2819886 00:05:38.113 20:18:53 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:38.113 20:18:53 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:38.113 20:18:53 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2819886' 00:05:38.113 killing process with pid 2819886 00:05:38.113 20:18:53 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 2819886 00:05:38.113 20:18:53 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 2819886 00:05:38.373 00:05:38.373 real 0m1.371s 00:05:38.373 user 0m2.525s 00:05:38.373 sys 0m0.393s 00:05:38.373 20:18:54 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.373 20:18:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.373 ************************************ 00:05:38.373 END TEST spdkcli_tcp 00:05:38.373 ************************************ 00:05:38.373 20:18:54 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.373 20:18:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.373 20:18:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.373 20:18:54 -- common/autotest_common.sh@10 -- # set +x 00:05:38.373 ************************************ 00:05:38.373 START TEST dpdk_mem_utility 00:05:38.373 ************************************ 00:05:38.373 20:18:54 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.634 * Looking for test storage... 
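The spdkcli_tcp run above exercises the RPC server over TCP rather than the default Unix socket: socat listens on port 9998 and forwards to /var/tmp/spdk.sock, and rpc.py is pointed at 127.0.0.1:9998 with retries (-r 100) and a timeout (-t 2), returning the rpc_get_methods listing shown. A minimal re-creation of the same plumbing outside the harness looks roughly like this (paths, port and flags taken from the log; the harness waits with waitforlisten, a polling loop stands in for it here):

    ./build/bin/spdk_tgt -m 0x3 &
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # wait for the RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    ./scripts/rpc.py -s 127.0.0.1 -p 9998 -r 100 -t 2 rpc_get_methods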
00:05:38.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:38.634 20:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:38.634 20:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2820289 00:05:38.634 20:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2820289 00:05:38.634 20:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.634 20:18:54 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 2820289 ']' 00:05:38.634 20:18:54 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.634 20:18:54 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:38.634 20:18:54 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.634 20:18:54 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:38.634 20:18:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.634 [2024-05-13 20:18:54.394468] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:38.634 [2024-05-13 20:18:54.394539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820289 ] 00:05:38.634 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.634 [2024-05-13 20:18:54.465148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.634 [2024-05-13 20:18:54.539214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.575 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:39.575 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:39.575 20:18:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:39.575 20:18:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:39.575 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.575 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.575 { 00:05:39.575 "filename": "/tmp/spdk_mem_dump.txt" 00:05:39.575 } 00:05:39.575 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.575 20:18:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:39.575 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:39.575 1 heaps totaling size 814.000000 MiB 00:05:39.575 size: 814.000000 MiB heap id: 0 00:05:39.575 end heaps---------- 00:05:39.575 8 mempools totaling size 598.116089 MiB 00:05:39.575 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:39.575 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:39.575 size: 84.521057 MiB name: bdev_io_2820289 00:05:39.575 size: 51.011292 MiB name: evtpool_2820289 00:05:39.576 size: 50.003479 MiB name: 
msgpool_2820289 00:05:39.576 size: 21.763794 MiB name: PDU_Pool 00:05:39.576 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:39.576 size: 0.026123 MiB name: Session_Pool 00:05:39.576 end mempools------- 00:05:39.576 6 memzones totaling size 4.142822 MiB 00:05:39.576 size: 1.000366 MiB name: RG_ring_0_2820289 00:05:39.576 size: 1.000366 MiB name: RG_ring_1_2820289 00:05:39.576 size: 1.000366 MiB name: RG_ring_4_2820289 00:05:39.576 size: 1.000366 MiB name: RG_ring_5_2820289 00:05:39.576 size: 0.125366 MiB name: RG_ring_2_2820289 00:05:39.576 size: 0.015991 MiB name: RG_ring_3_2820289 00:05:39.576 end memzones------- 00:05:39.576 20:18:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:39.576 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:39.576 list of free elements. size: 12.519348 MiB 00:05:39.576 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:39.576 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:39.576 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:39.576 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:39.576 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:39.576 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:39.576 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:39.576 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:39.576 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:39.576 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:39.576 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:39.576 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:39.576 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:39.576 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:39.576 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:39.576 list of standard malloc elements. 
size: 199.218079 MiB 00:05:39.576 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:39.576 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:39.576 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:39.576 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:39.576 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:39.576 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:39.576 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:39.576 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:39.576 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:39.576 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:39.576 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:39.576 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:39.576 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:39.576 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:39.576 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:39.576 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:39.576 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:39.576 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:39.576 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:39.576 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:39.576 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:39.576 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:39.576 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:39.576 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:39.576 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:39.576 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:39.576 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:39.576 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:39.576 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:39.576 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:39.576 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:39.576 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:39.576 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:39.576 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:39.576 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:39.576 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:39.576 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:39.576 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:39.576 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:39.576 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:39.576 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:39.576 list of memzone associated elements. 
size: 602.262573 MiB 00:05:39.576 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:39.576 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:39.576 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:39.576 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:39.576 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:39.576 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2820289_0 00:05:39.576 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:39.576 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2820289_0 00:05:39.576 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:39.576 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2820289_0 00:05:39.576 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:39.576 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:39.576 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:39.576 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:39.576 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:39.576 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2820289 00:05:39.576 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:39.576 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2820289 00:05:39.576 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:39.576 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2820289 00:05:39.576 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:39.576 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:39.576 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:39.576 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:39.576 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:39.576 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:39.576 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:39.576 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:39.576 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:39.576 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2820289 00:05:39.576 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:39.576 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2820289 00:05:39.576 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:39.576 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2820289 00:05:39.576 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:39.576 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2820289 00:05:39.576 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:39.576 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2820289 00:05:39.576 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:39.576 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:39.576 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:39.576 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:39.576 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:39.576 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:39.576 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:39.576 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2820289 00:05:39.576 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:39.576 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:39.576 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:39.576 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:39.576 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:39.576 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2820289 00:05:39.576 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:39.576 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:39.576 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:39.576 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2820289 00:05:39.576 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:39.576 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2820289 00:05:39.576 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:39.576 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:39.576 20:18:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:39.576 20:18:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2820289 00:05:39.576 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 2820289 ']' 00:05:39.576 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 2820289 00:05:39.576 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:39.576 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:39.576 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2820289 00:05:39.576 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:39.576 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:39.576 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2820289' 00:05:39.576 killing process with pid 2820289 00:05:39.576 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 2820289 00:05:39.576 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 2820289 00:05:39.576 00:05:39.576 real 0m1.277s 00:05:39.576 user 0m1.364s 00:05:39.576 sys 0m0.353s 00:05:39.576 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.837 20:18:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.837 ************************************ 00:05:39.837 END TEST dpdk_mem_utility 00:05:39.837 ************************************ 00:05:39.837 20:18:55 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:39.837 20:18:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.837 20:18:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.837 20:18:55 -- common/autotest_common.sh@10 -- # set +x 00:05:39.837 ************************************ 00:05:39.837 START TEST event 00:05:39.837 ************************************ 00:05:39.837 20:18:55 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:39.837 * Looking for test storage... 
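The dpdk_mem_utility pass above asks a freshly started spdk_tgt to dump its DPDK memory state via the env_dpdk_get_mem_stats RPC (the dump lands in /tmp/spdk_mem_dump.txt, as the RPC response shows) and then has scripts/dpdk_mem_info.py summarize it: first the aggregate heap/mempool/memzone totals, then the per-element detail of heap 0 with -m 0. Against a running target the manual sequence is roughly (a sketch using the default RPC socket):

    ./scripts/rpc.py env_dpdk_get_mem_stats        # writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                     # heaps, mempools, memzones
    ./scripts/dpdk_mem_info.py -m 0                # free/malloc elements of heap 0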
00:05:39.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:39.837 20:18:55 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:39.837 20:18:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:39.837 20:18:55 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.837 20:18:55 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:39.837 20:18:55 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.837 20:18:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.837 ************************************ 00:05:39.837 START TEST event_perf 00:05:39.838 ************************************ 00:05:39.838 20:18:55 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.838 Running I/O for 1 seconds...[2024-05-13 20:18:55.760178] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:39.838 [2024-05-13 20:18:55.760278] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820665 ] 00:05:40.097 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.097 [2024-05-13 20:18:55.838626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.097 [2024-05-13 20:18:55.914447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.097 [2024-05-13 20:18:55.914648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.097 [2024-05-13 20:18:55.914772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.097 [2024-05-13 20:18:55.914776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.038 Running I/O for 1 seconds... 00:05:41.038 lcore 0: 166125 00:05:41.038 lcore 1: 166127 00:05:41.038 lcore 2: 166125 00:05:41.038 lcore 3: 166128 00:05:41.038 done. 00:05:41.038 00:05:41.038 real 0m1.230s 00:05:41.038 user 0m4.131s 00:05:41.038 sys 0m0.097s 00:05:41.038 20:18:56 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.038 20:18:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.038 ************************************ 00:05:41.038 END TEST event_perf 00:05:41.038 ************************************ 00:05:41.299 20:18:57 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:41.299 20:18:57 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:41.299 20:18:57 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.299 20:18:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.299 ************************************ 00:05:41.299 START TEST event_reactor 00:05:41.299 ************************************ 00:05:41.299 20:18:57 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:41.299 [2024-05-13 20:18:57.075455] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
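event_perf above is a one-second micro-benchmark: with coremask 0xF it brings up a reactor on lcores 0-3 and each reactor reports how many events it processed over the -t interval (the "lcore N: ..." lines), so overall throughput is the sum of the four counters. Run standalone it looks roughly like this (binary path as invoked by the harness; the awk total is just a convenience for reading the output, not part of the test):

    ./test/event/event_perf/event_perf -m 0xF -t 1 | tee /tmp/event_perf.log
    awk '/^lcore/ {sum += $NF} END {print "total events in interval:", sum}' /tmp/event_perf.log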
00:05:41.299 [2024-05-13 20:18:57.075545] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820837 ] 00:05:41.299 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.299 [2024-05-13 20:18:57.146270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.299 [2024-05-13 20:18:57.213378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.680 test_start 00:05:42.680 oneshot 00:05:42.680 tick 100 00:05:42.680 tick 100 00:05:42.680 tick 250 00:05:42.680 tick 100 00:05:42.680 tick 100 00:05:42.680 tick 250 00:05:42.680 tick 100 00:05:42.680 tick 500 00:05:42.680 tick 100 00:05:42.680 tick 100 00:05:42.680 tick 250 00:05:42.680 tick 100 00:05:42.680 tick 100 00:05:42.680 test_end 00:05:42.680 00:05:42.680 real 0m1.211s 00:05:42.680 user 0m1.131s 00:05:42.680 sys 0m0.075s 00:05:42.680 20:18:58 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.680 20:18:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:42.680 ************************************ 00:05:42.680 END TEST event_reactor 00:05:42.680 ************************************ 00:05:42.680 20:18:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.680 20:18:58 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:42.680 20:18:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.680 20:18:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.680 ************************************ 00:05:42.680 START TEST event_reactor_perf 00:05:42.680 ************************************ 00:05:42.680 20:18:58 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.680 [2024-05-13 20:18:58.370580] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:05:42.680 [2024-05-13 20:18:58.370671] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2821070 ] 00:05:42.680 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.680 [2024-05-13 20:18:58.441746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.680 [2024-05-13 20:18:58.512082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.621 test_start 00:05:43.621 test_end 00:05:43.621 Performance: 367408 events per second 00:05:43.621 00:05:43.621 real 0m1.215s 00:05:43.621 user 0m1.130s 00:05:43.621 sys 0m0.081s 00:05:43.621 20:18:59 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.621 20:18:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.621 ************************************ 00:05:43.621 END TEST event_reactor_perf 00:05:43.621 ************************************ 00:05:43.882 20:18:59 event -- event/event.sh@49 -- # uname -s 00:05:43.882 20:18:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:43.882 20:18:59 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:43.882 20:18:59 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.882 20:18:59 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.882 20:18:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.882 ************************************ 00:05:43.882 START TEST event_scheduler 00:05:43.882 ************************************ 00:05:43.882 20:18:59 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:43.882 * Looking for test storage... 00:05:43.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:43.882 20:18:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:43.882 20:18:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2821456 00:05:43.882 20:18:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.882 20:18:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:43.882 20:18:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2821456 00:05:43.882 20:18:59 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 2821456 ']' 00:05:43.882 20:18:59 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.882 20:18:59 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:43.882 20:18:59 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
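The event_scheduler suite launched above starts test/event/scheduler/scheduler with --wait-for-rpc, so the application pauses after its RPC server is up and the test can pick the scheduler implementation before subsystem initialization completes. The two RPCs involved, framework_set_scheduler and framework_start_init, both appear in the rpc_get_methods listing earlier in this log; driven by hand the same idea looks roughly like this (a sketch, not the scheduler.sh script; the harness waits with waitforlisten, a polling loop stands in for it):

    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init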
00:05:43.882 20:18:59 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:43.882 20:18:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.882 [2024-05-13 20:18:59.801697] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:43.882 [2024-05-13 20:18:59.801761] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2821456 ] 00:05:44.142 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.142 [2024-05-13 20:18:59.866897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.142 [2024-05-13 20:18:59.930484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.142 [2024-05-13 20:18:59.930786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.142 [2024-05-13 20:18:59.930893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.142 [2024-05-13 20:18:59.930890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.712 20:19:00 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:44.712 20:19:00 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:44.712 20:19:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:44.712 20:19:00 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.712 20:19:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.712 POWER: Env isn't set yet! 00:05:44.712 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:44.712 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:44.712 POWER: Cannot set governor of lcore 0 to userspace 00:05:44.712 POWER: Attempting to initialise PSTAT power management... 00:05:44.712 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:44.712 POWER: Initialized successfully for lcore 0 power management 00:05:44.712 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:44.712 POWER: Initialized successfully for lcore 1 power management 00:05:44.712 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:44.712 POWER: Initialized successfully for lcore 2 power management 00:05:44.712 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:44.712 POWER: Initialized successfully for lcore 3 power management 00:05:44.712 20:19:00 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.712 20:19:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:44.712 20:19:00 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.712 20:19:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.973 [2024-05-13 20:19:00.700928] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:44.973 20:19:00 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.973 20:19:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:44.973 20:19:00 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.973 20:19:00 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.973 20:19:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.973 ************************************ 00:05:44.973 START TEST scheduler_create_thread 00:05:44.973 ************************************ 00:05:44.973 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:44.973 20:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:44.973 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.973 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.973 2 00:05:44.973 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.973 20:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:44.973 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.973 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.973 3 00:05:44.973 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.973 20:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:44.973 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.973 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.973 4 00:05:44.973 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.974 5 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.974 6 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.974 7 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.974 8 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.974 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.972 9 00:05:45.972 20:19:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.972 20:19:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:45.972 20:19:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.972 20:19:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.357 10 00:05:47.357 20:19:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.357 20:19:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:47.357 20:19:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.357 20:19:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.929 20:19:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.929 20:19:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:47.929 20:19:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:47.929 20:19:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.929 20:19:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.510 20:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.510 20:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:48.510 20:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.510 20:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.081 20:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.081 20:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:49.081 20:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:49.081 20:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.081 20:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.653 20:19:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.653 00:05:49.653 real 0m4.566s 00:05:49.654 user 0m0.025s 00:05:49.654 sys 0m0.005s 00:05:49.654 20:19:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.654 20:19:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.654 ************************************ 00:05:49.654 END TEST scheduler_create_thread 00:05:49.654 ************************************ 00:05:49.654 20:19:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:49.654 20:19:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2821456 00:05:49.654 20:19:05 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 2821456 ']' 00:05:49.654 20:19:05 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 2821456 00:05:49.654 20:19:05 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:05:49.654 20:19:05 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:49.654 20:19:05 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2821456 00:05:49.654 20:19:05 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:49.654 20:19:05 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:49.654 20:19:05 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2821456' 00:05:49.654 killing process with pid 2821456 00:05:49.654 20:19:05 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 2821456 00:05:49.654 20:19:05 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 2821456 00:05:49.654 [2024-05-13 20:19:05.542332] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
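The scheduler_create_thread subtest above drives the test application's own RPC plugin: rpc_cmd --plugin scheduler_plugin exposes scheduler_thread_create (-n name, -m cpumask, -a busy percentage), scheduler_thread_set_active and scheduler_thread_delete, and the test uses them to create pinned active and idle threads, retune one to 50% busy, and delete another while the dynamic scheduler rebalances them. Against the same application the calls look roughly like this (thread IDs are returned by the create call; the ID used below is illustrative, and outside the harness PYTHONPATH may need to point at test/event/scheduler so the plugin module is importable):

    srpc() { ./scripts/rpc.py --plugin scheduler_plugin "$@"; }
    srpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # prints the new thread id, e.g. 2
    srpc scheduler_thread_create -n idle_pinned -m 0x2 -a 0
    srpc scheduler_thread_set_active 2 50                         # drop that thread to 50% busy
    srpc scheduler_thread_delete 2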
00:05:49.914 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:49.914 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:49.914 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:49.914 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:49.914 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:49.915 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:49.915 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:49.915 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:49.915 00:05:49.915 real 0m6.092s 00:05:49.915 user 0m14.736s 00:05:49.915 sys 0m0.363s 00:05:49.915 20:19:05 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.915 20:19:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.915 ************************************ 00:05:49.915 END TEST event_scheduler 00:05:49.915 ************************************ 00:05:49.915 20:19:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:49.915 20:19:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:49.915 20:19:05 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.915 20:19:05 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.915 20:19:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.915 ************************************ 00:05:49.915 START TEST app_repeat 00:05:49.915 ************************************ 00:05:49.915 20:19:05 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:49.915 20:19:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.915 20:19:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.915 20:19:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:49.915 20:19:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.915 20:19:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:49.915 20:19:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:49.915 20:19:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:49.915 20:19:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2822841 00:05:49.915 20:19:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.915 20:19:05 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:49.915 20:19:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2822841' 00:05:49.915 Process app_repeat pid: 2822841 00:05:49.915 20:19:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.915 20:19:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:49.915 spdk_app_start Round 0 00:05:49.915 20:19:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2822841 /var/tmp/spdk-nbd.sock 00:05:49.915 20:19:05 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2822841 ']' 00:05:49.915 20:19:05 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.915 20:19:05 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:49.915 20:19:05 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.915 20:19:05 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:49.915 20:19:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.176 [2024-05-13 20:19:05.876157] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:50.176 [2024-05-13 20:19:05.876224] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2822841 ] 00:05:50.176 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.176 [2024-05-13 20:19:05.946843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.176 [2024-05-13 20:19:06.021795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.176 [2024-05-13 20:19:06.021798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.747 20:19:06 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:50.747 20:19:06 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:50.747 20:19:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.009 Malloc0 00:05:51.009 20:19:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.270 Malloc1 00:05:51.270 20:19:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.270 /dev/nbd0 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.270 20:19:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.270 20:19:07 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:51.270 20:19:07 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:51.270 20:19:07 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:51.270 20:19:07 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:51.270 20:19:07 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:51.270 20:19:07 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:51.270 20:19:07 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:51.270 20:19:07 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:51.270 20:19:07 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.270 1+0 records in 00:05:51.270 1+0 records out 00:05:51.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214003 s, 19.1 MB/s 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:51.531 20:19:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.531 20:19:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.531 20:19:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.531 /dev/nbd1 00:05:51.531 20:19:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.531 20:19:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.531 1+0 records in 00:05:51.531 1+0 records out 00:05:51.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000301851 s, 13.6 MB/s 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:51.531 20:19:07 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:51.531 20:19:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.531 20:19:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.531 20:19:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.531 20:19:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.531 20:19:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.793 { 00:05:51.793 "nbd_device": "/dev/nbd0", 00:05:51.793 "bdev_name": "Malloc0" 00:05:51.793 }, 00:05:51.793 { 00:05:51.793 "nbd_device": "/dev/nbd1", 00:05:51.793 "bdev_name": "Malloc1" 00:05:51.793 } 00:05:51.793 ]' 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.793 { 00:05:51.793 "nbd_device": "/dev/nbd0", 00:05:51.793 "bdev_name": "Malloc0" 00:05:51.793 }, 00:05:51.793 { 00:05:51.793 "nbd_device": "/dev/nbd1", 00:05:51.793 "bdev_name": "Malloc1" 00:05:51.793 } 00:05:51.793 ]' 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.793 /dev/nbd1' 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.793 /dev/nbd1' 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.793 256+0 records in 00:05:51.793 256+0 records out 00:05:51.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115906 s, 90.5 MB/s 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.793 256+0 records in 00:05:51.793 256+0 records out 00:05:51.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155872 s, 67.3 MB/s 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.793 256+0 records in 00:05:51.793 256+0 records out 00:05:51.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173002 s, 60.6 MB/s 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.793 20:19:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.054 20:19:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.054 20:19:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.054 20:19:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.054 20:19:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.055 20:19:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.055 20:19:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.055 20:19:07 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:05:52.055 20:19:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.055 20:19:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.055 20:19:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.316 20:19:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.316 20:19:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.577 20:19:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.838 [2024-05-13 20:19:08.546161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.838 [2024-05-13 20:19:08.609215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.838 [2024-05-13 20:19:08.609218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.838 [2024-05-13 20:19:08.640946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.838 [2024-05-13 20:19:08.640985] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
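[Editor's note] The trace above is one pass (Round 0) of the app_repeat test loop: wait for the app's RPC socket, create two 64 MB malloc bdevs with 4 KiB blocks, export and verify them over NBD, then kill the instance and pause before the next round. The sketch below is reconstructed from the shell trace (event/event.sh@23-35); it is an approximation, not the verbatim script, and `$SPDK_DIR` / `$app_pid` are placeholders for the concrete path and PID seen in the log.

```bash
# Approximate shape of the app_repeat round loop, reconstructed from the trace
# (event/event.sh@23-35). $SPDK_DIR and $app_pid are placeholders.
rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock      # block until the RPC socket is up

    $rpc bdev_malloc_create 64 4096                      # Malloc0: 64 MB, 4 KiB blocks
    $rpc bdev_malloc_create 64 4096                      # Malloc1

    # export both bdevs as /dev/nbd0 and /dev/nbd1, write and verify data, detach
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'

    $rpc spdk_kill_instance SIGTERM                      # restart the app for the next round
    sleep 3
done
```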
00:05:56.138 20:19:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.138 20:19:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:56.138 spdk_app_start Round 1 00:05:56.138 20:19:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2822841 /var/tmp/spdk-nbd.sock 00:05:56.138 20:19:11 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2822841 ']' 00:05:56.138 20:19:11 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.138 20:19:11 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:56.138 20:19:11 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.138 20:19:11 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:56.138 20:19:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.138 20:19:11 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:56.138 20:19:11 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:56.138 20:19:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.138 Malloc0 00:05:56.138 20:19:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.138 Malloc1 00:05:56.138 20:19:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.138 20:19:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.138 /dev/nbd0 00:05:56.138 20:19:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.138 20:19:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:56.138 20:19:12 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:56.138 20:19:12 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:56.138 20:19:12 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:56.138 20:19:12 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:56.138 20:19:12 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:56.138 20:19:12 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:56.138 20:19:12 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:56.138 20:19:12 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:56.138 20:19:12 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.138 1+0 records in 00:05:56.138 1+0 records out 00:05:56.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230528 s, 17.8 MB/s 00:05:56.138 20:19:12 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.138 20:19:12 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:56.138 20:19:12 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.138 20:19:12 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:56.138 20:19:12 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:56.138 20:19:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.138 20:19:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.138 20:19:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.398 /dev/nbd1 00:05:56.398 20:19:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.398 20:19:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.398 20:19:12 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:56.398 20:19:12 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:56.398 20:19:12 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:56.398 20:19:12 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:56.398 20:19:12 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:56.398 20:19:12 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:56.398 20:19:12 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:56.398 20:19:12 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:56.398 20:19:12 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.398 1+0 records in 00:05:56.398 1+0 records out 00:05:56.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032892 s, 12.5 MB/s 00:05:56.398 20:19:12 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.398 20:19:12 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:56.398 20:19:12 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.398 20:19:12 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:56.398 20:19:12 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:56.398 20:19:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.398 20:19:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.398 20:19:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.398 20:19:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.398 20:19:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.658 { 00:05:56.658 "nbd_device": "/dev/nbd0", 00:05:56.658 "bdev_name": "Malloc0" 00:05:56.658 }, 00:05:56.658 { 00:05:56.658 "nbd_device": "/dev/nbd1", 00:05:56.658 "bdev_name": "Malloc1" 00:05:56.658 } 00:05:56.658 ]' 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.658 { 00:05:56.658 "nbd_device": "/dev/nbd0", 00:05:56.658 "bdev_name": "Malloc0" 00:05:56.658 }, 00:05:56.658 { 00:05:56.658 "nbd_device": "/dev/nbd1", 00:05:56.658 "bdev_name": "Malloc1" 00:05:56.658 } 00:05:56.658 ]' 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.658 /dev/nbd1' 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.658 /dev/nbd1' 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.658 256+0 records in 00:05:56.658 256+0 records out 00:05:56.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115655 s, 90.7 MB/s 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.658 256+0 records in 00:05:56.658 256+0 records out 00:05:56.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0160468 s, 65.3 MB/s 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.658 256+0 records in 00:05:56.658 256+0 records out 00:05:56.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018127 s, 57.8 MB/s 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.658 20:19:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.919 20:19:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.179 20:19:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.179 20:19:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.179 20:19:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.179 20:19:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.179 20:19:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.179 20:19:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.179 20:19:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.179 20:19:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.179 20:19:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.179 20:19:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.179 20:19:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.179 20:19:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.179 20:19:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.179 20:19:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.179 20:19:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.179 20:19:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.179 20:19:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.179 20:19:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.179 20:19:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.439 20:19:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.439 [2024-05-13 20:19:13.377609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.699 [2024-05-13 20:19:13.440011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.699 [2024-05-13 20:19:13.440014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.699 [2024-05-13 20:19:13.472642] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.699 [2024-05-13 20:19:13.472679] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
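[Editor's note] Inside each round, the dd/cmp lines above come from nbd_dd_data_verify: it writes 1 MiB of random data onto every exported NBD device and then compares each device back against the source file. A minimal sketch reconstructed from the trace (bdev/nbd_common.sh@70-85); the temp-file path is a placeholder for the one in the log.

```bash
# Write/verify phase as traced above (bdev/nbd_common.sh@70-85); an approximation.
nbd_list=('/dev/nbd0' '/dev/nbd1')
tmp_file="$SPDK_DIR/test/event/nbdrandtest"      # placeholder for the path in the log

# write: 1 MiB of random data, copied onto every exported NBD device with O_DIRECT
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify: the first 1 MiB of each device must match the source file byte-for-byte
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"
```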
00:06:00.994 20:19:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.994 20:19:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:00.994 spdk_app_start Round 2 00:06:00.994 20:19:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2822841 /var/tmp/spdk-nbd.sock 00:06:00.994 20:19:16 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2822841 ']' 00:06:00.994 20:19:16 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.994 20:19:16 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.994 20:19:16 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:00.995 20:19:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.995 Malloc0 00:06:00.995 20:19:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.995 Malloc1 00:06:00.995 20:19:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.995 /dev/nbd0 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.995 1+0 records in 00:06:00.995 1+0 records out 00:06:00.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267962 s, 15.3 MB/s 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:00.995 20:19:16 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.995 20:19:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.254 /dev/nbd1 00:06:01.254 20:19:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.254 20:19:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.254 20:19:17 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:01.254 20:19:17 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:01.254 20:19:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:01.254 20:19:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:01.254 20:19:17 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:01.254 20:19:17 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:01.254 20:19:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:01.254 20:19:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:01.254 20:19:17 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.254 1+0 records in 00:06:01.254 1+0 records out 00:06:01.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278171 s, 14.7 MB/s 00:06:01.254 20:19:17 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.254 20:19:17 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:01.254 20:19:17 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.254 20:19:17 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:01.254 20:19:17 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:01.254 20:19:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.254 20:19:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.254 20:19:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.254 20:19:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.254 20:19:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.515 { 00:06:01.515 "nbd_device": "/dev/nbd0", 00:06:01.515 "bdev_name": "Malloc0" 00:06:01.515 }, 00:06:01.515 { 00:06:01.515 "nbd_device": "/dev/nbd1", 00:06:01.515 "bdev_name": "Malloc1" 00:06:01.515 } 00:06:01.515 ]' 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.515 { 00:06:01.515 "nbd_device": "/dev/nbd0", 00:06:01.515 "bdev_name": "Malloc0" 00:06:01.515 }, 00:06:01.515 { 00:06:01.515 "nbd_device": "/dev/nbd1", 00:06:01.515 "bdev_name": "Malloc1" 00:06:01.515 } 00:06:01.515 ]' 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.515 /dev/nbd1' 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.515 /dev/nbd1' 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.515 256+0 records in 00:06:01.515 256+0 records out 00:06:01.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117714 s, 89.1 MB/s 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.515 256+0 records in 00:06:01.515 256+0 records out 00:06:01.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0159099 s, 65.9 MB/s 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.515 256+0 records in 00:06:01.515 256+0 records out 00:06:01.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167068 s, 62.8 MB/s 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.515 20:19:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.775 20:19:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.035 20:19:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.036 20:19:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.036 20:19:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.296 20:19:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.296 [2024-05-13 20:19:18.234840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.556 [2024-05-13 20:19:18.297457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.556 [2024-05-13 20:19:18.297547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.556 [2024-05-13 20:19:18.329333] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.556 [2024-05-13 20:19:18.329369] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
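[Editor's note] Every nbd_start_disk / nbd_stop_disk call above is bracketed by waitfornbd / waitfornbd_exit, which poll /proc/partitions for the device node; waitfornbd also proves the device is readable with a single direct-I/O read. A rough reconstruction from the trace (common/autotest_common.sh@864-885, bdev/nbd_common.sh@35-45); the retry pacing and the temp-file path are assumptions, only the grep/dd/stat checks are taken from the log.

```bash
# Rough reconstruction of the NBD wait helpers seen in the trace; retry/sleep
# details are assumptions, the grep/dd/stat checks mirror the log.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break   # device node showed up
        sleep 0.1
    done
    # confirm the device is readable: one 4 KiB direct read, result must be non-empty
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    local size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break   # gone from the partition table
        sleep 0.1
    done
}
```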
00:06:05.854 20:19:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2822841 /var/tmp/spdk-nbd.sock 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2822841 ']' 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:05.854 20:19:21 event.app_repeat -- event/event.sh@39 -- # killprocess 2822841 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 2822841 ']' 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 2822841 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2822841 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2822841' 00:06:05.854 killing process with pid 2822841 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@965 -- # kill 2822841 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@970 -- # wait 2822841 00:06:05.854 spdk_app_start is called in Round 0. 00:06:05.854 Shutdown signal received, stop current app iteration 00:06:05.854 Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 reinitialization... 00:06:05.854 spdk_app_start is called in Round 1. 00:06:05.854 Shutdown signal received, stop current app iteration 00:06:05.854 Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 reinitialization... 00:06:05.854 spdk_app_start is called in Round 2. 00:06:05.854 Shutdown signal received, stop current app iteration 00:06:05.854 Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 reinitialization... 00:06:05.854 spdk_app_start is called in Round 3. 
00:06:05.854 Shutdown signal received, stop current app iteration 00:06:05.854 20:19:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:05.854 20:19:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:05.854 00:06:05.854 real 0m15.576s 00:06:05.854 user 0m33.449s 00:06:05.854 sys 0m2.137s 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.854 20:19:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.854 ************************************ 00:06:05.854 END TEST app_repeat 00:06:05.854 ************************************ 00:06:05.854 20:19:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:05.854 20:19:21 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:05.854 20:19:21 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.854 20:19:21 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.854 20:19:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.854 ************************************ 00:06:05.854 START TEST cpu_locks 00:06:05.854 ************************************ 00:06:05.854 20:19:21 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:05.854 * Looking for test storage... 00:06:05.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:05.854 20:19:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:05.854 20:19:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:05.854 20:19:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:05.854 20:19:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:05.854 20:19:21 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.854 20:19:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.854 20:19:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.854 ************************************ 00:06:05.854 START TEST default_locks 00:06:05.854 ************************************ 00:06:05.854 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:05.854 20:19:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2826097 00:06:05.854 20:19:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2826097 00:06:05.854 20:19:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.854 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2826097 ']' 00:06:05.854 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.854 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.854 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
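[Editor's note] The killprocess helper that closed out app_repeat above (and is reused by each cpu_locks test that follows) is a guarded kill-and-wait; its individual checks are spelled out in the trace (common/autotest_common.sh@946-970). A reconstructed sketch, with the sudo special case abbreviated:

```bash
# Sketch of killprocess as traced above (common/autotest_common.sh@946-970);
# the sudo branch is abbreviated, everything else mirrors the trace.
killprocess() {
    [ -z "$1" ] && return 1                      # a PID argument is required
    local pid=$1
    kill -0 "$pid"                               # fail fast if it already exited
    local process_name=
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in the log
    fi
    if [ "$process_name" = sudo ]; then
        :   # the real helper signals sudo's child instead; detail omitted here
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}
```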
00:06:05.854 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.854 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.854 [2024-05-13 20:19:21.683203] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:05.854 [2024-05-13 20:19:21.683256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826097 ] 00:06:05.854 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.854 [2024-05-13 20:19:21.751891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.114 [2024-05-13 20:19:21.825359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.684 20:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.684 20:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:06.684 20:19:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2826097 00:06:06.684 20:19:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2826097 00:06:06.684 20:19:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.254 lslocks: write error 00:06:07.255 20:19:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2826097 00:06:07.255 20:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 2826097 ']' 00:06:07.255 20:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 2826097 00:06:07.255 20:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:07.255 20:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:07.255 20:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2826097 00:06:07.255 20:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:07.255 20:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:07.255 20:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2826097' 00:06:07.255 killing process with pid 2826097 00:06:07.255 20:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 2826097 00:06:07.255 20:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 2826097 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2826097 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2826097 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 2826097 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2826097 ']' 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2826097) - No such process 00:06:07.255 ERROR: process (pid: 2826097) is no longer running 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.255 00:06:07.255 real 0m1.541s 00:06:07.255 user 0m1.631s 00:06:07.255 sys 0m0.529s 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.255 20:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.255 ************************************ 00:06:07.255 END TEST default_locks 00:06:07.255 ************************************ 00:06:07.517 20:19:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:07.517 20:19:23 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.517 20:19:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.517 20:19:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.517 ************************************ 00:06:07.517 START TEST default_locks_via_rpc 00:06:07.517 ************************************ 00:06:07.517 20:19:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:07.517 20:19:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2826463 00:06:07.517 20:19:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2826463 00:06:07.517 20:19:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.517 20:19:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2826463 ']' 00:06:07.517 20:19:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.517 20:19:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.517 20:19:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.517 20:19:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.517 20:19:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.517 [2024-05-13 20:19:23.302421] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:07.517 [2024-05-13 20:19:23.302474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826463 ] 00:06:07.517 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.517 [2024-05-13 20:19:23.371059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.517 [2024-05-13 20:19:23.445683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.458 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2826463 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2826463 00:06:08.459 20:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.719 20:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2826463 00:06:08.719 20:19:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 2826463 ']' 00:06:08.719 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 2826463 00:06:08.719 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:08.719 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:08.719 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2826463 00:06:08.981 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:08.981 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:08.981 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2826463' 00:06:08.981 killing process with pid 2826463 00:06:08.981 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 2826463 00:06:08.981 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 2826463 00:06:08.981 00:06:08.981 real 0m1.638s 00:06:08.981 user 0m1.735s 00:06:08.981 sys 0m0.549s 00:06:08.982 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.982 20:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.982 ************************************ 00:06:08.982 END TEST default_locks_via_rpc 00:06:08.982 ************************************ 00:06:09.243 20:19:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:09.243 20:19:24 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.243 20:19:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.243 20:19:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.243 ************************************ 00:06:09.243 START TEST non_locking_app_on_locked_coremask 00:06:09.243 ************************************ 00:06:09.243 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:09.243 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2826830 00:06:09.243 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2826830 /var/tmp/spdk.sock 00:06:09.243 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.243 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2826830 ']' 00:06:09.243 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.243 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.243 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:09.243 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.243 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.243 [2024-05-13 20:19:25.019342] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:09.243 [2024-05-13 20:19:25.019395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826830 ] 00:06:09.243 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.243 [2024-05-13 20:19:25.086994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.243 [2024-05-13 20:19:25.158185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.186 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.186 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:10.186 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2827055 00:06:10.186 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2827055 /var/tmp/spdk2.sock 00:06:10.186 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2827055 ']' 00:06:10.186 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:10.186 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.186 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.186 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.186 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.186 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.186 [2024-05-13 20:19:25.841547] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:10.186 [2024-05-13 20:19:25.841612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827055 ] 00:06:10.186 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.186 [2024-05-13 20:19:25.941612] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.186 [2024-05-13 20:19:25.941640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.186 [2024-05-13 20:19:26.070860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.758 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.758 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:10.758 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2826830 00:06:10.758 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2826830 00:06:10.758 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.018 lslocks: write error 00:06:11.018 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2826830 00:06:11.019 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2826830 ']' 00:06:11.019 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2826830 00:06:11.019 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:11.019 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:11.019 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2826830 00:06:11.019 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:11.019 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:11.019 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2826830' 00:06:11.019 killing process with pid 2826830 00:06:11.019 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2826830 00:06:11.019 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2826830 00:06:11.587 20:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2827055 00:06:11.587 20:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2827055 ']' 00:06:11.587 20:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2827055 00:06:11.587 20:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:11.587 20:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:11.587 20:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2827055 00:06:11.587 20:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:11.587 20:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:11.587 20:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2827055' 00:06:11.587 
killing process with pid 2827055 00:06:11.587 20:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2827055 00:06:11.587 20:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2827055 00:06:11.848 00:06:11.848 real 0m2.605s 00:06:11.848 user 0m2.847s 00:06:11.848 sys 0m0.767s 00:06:11.848 20:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.848 20:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.848 ************************************ 00:06:11.848 END TEST non_locking_app_on_locked_coremask 00:06:11.848 ************************************ 00:06:11.848 20:19:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:11.848 20:19:27 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.848 20:19:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.848 20:19:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.848 ************************************ 00:06:11.848 START TEST locking_app_on_unlocked_coremask 00:06:11.848 ************************************ 00:06:11.848 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:11.848 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2827529 00:06:11.848 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2827529 /var/tmp/spdk.sock 00:06:11.848 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:11.848 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2827529 ']' 00:06:11.848 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.848 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:11.848 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.848 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:11.848 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.848 [2024-05-13 20:19:27.704682] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:11.848 [2024-05-13 20:19:27.704733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827529 ] 00:06:11.849 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.849 [2024-05-13 20:19:27.772519] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:11.849 [2024-05-13 20:19:27.772550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.110 [2024-05-13 20:19:27.843161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.682 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.682 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:12.682 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2827548 00:06:12.682 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2827548 /var/tmp/spdk2.sock 00:06:12.682 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2827548 ']' 00:06:12.682 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.682 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.682 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.682 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.682 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.682 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.682 [2024-05-13 20:19:28.522085] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:06:12.682 [2024-05-13 20:19:28.522139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827548 ] 00:06:12.682 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.682 [2024-05-13 20:19:28.622885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.983 [2024-05-13 20:19:28.752210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.555 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.555 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:13.555 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2827548 00:06:13.555 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2827548 00:06:13.555 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.815 lslocks: write error 00:06:13.815 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2827529 00:06:13.815 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2827529 ']' 00:06:13.815 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2827529 00:06:13.815 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:13.815 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:13.815 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2827529 00:06:14.093 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:14.093 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:14.093 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2827529' 00:06:14.093 killing process with pid 2827529 00:06:14.094 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2827529 00:06:14.094 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2827529 00:06:14.392 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2827548 00:06:14.392 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2827548 ']' 00:06:14.392 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2827548 00:06:14.392 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:14.392 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:14.392 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2827548 00:06:14.392 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:06:14.392 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:14.392 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2827548' 00:06:14.392 killing process with pid 2827548 00:06:14.392 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2827548 00:06:14.392 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2827548 00:06:14.655 00:06:14.656 real 0m2.816s 00:06:14.656 user 0m3.064s 00:06:14.656 sys 0m0.849s 00:06:14.656 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.656 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.656 ************************************ 00:06:14.656 END TEST locking_app_on_unlocked_coremask 00:06:14.656 ************************************ 00:06:14.656 20:19:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:14.656 20:19:30 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:14.656 20:19:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.656 20:19:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.656 ************************************ 00:06:14.656 START TEST locking_app_on_locked_coremask 00:06:14.656 ************************************ 00:06:14.656 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:14.656 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2828017 00:06:14.656 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2828017 /var/tmp/spdk.sock 00:06:14.656 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2828017 ']' 00:06:14.656 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.656 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.656 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:14.656 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.656 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:14.656 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.916 [2024-05-13 20:19:30.601910] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:06:14.916 [2024-05-13 20:19:30.601957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828017 ] 00:06:14.916 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.916 [2024-05-13 20:19:30.666996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.916 [2024-05-13 20:19:30.731573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2828257 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2828257 /var/tmp/spdk2.sock 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2828257 /var/tmp/spdk2.sock 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2828257 /var/tmp/spdk2.sock 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2828257 ']' 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.486 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.486 [2024-05-13 20:19:31.407113] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:06:15.486 [2024-05-13 20:19:31.407164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828257 ] 00:06:15.746 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.746 [2024-05-13 20:19:31.505789] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2828017 has claimed it. 00:06:15.746 [2024-05-13 20:19:31.505826] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:16.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2828257) - No such process 00:06:16.316 ERROR: process (pid: 2828257) is no longer running 00:06:16.316 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:16.316 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:16.316 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:16.316 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:16.316 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:16.316 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:16.316 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2828017 00:06:16.316 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2828017 00:06:16.316 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.577 lslocks: write error 00:06:16.577 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2828017 00:06:16.577 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2828017 ']' 00:06:16.577 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2828017 00:06:16.577 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:16.577 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:16.577 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2828017 00:06:16.577 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:16.577 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:16.577 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2828017' 00:06:16.577 killing process with pid 2828017 00:06:16.577 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2828017 00:06:16.577 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2828017 00:06:16.837 00:06:16.837 real 0m2.166s 00:06:16.837 user 0m2.403s 00:06:16.837 sys 0m0.580s 00:06:16.837 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.837 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.837 ************************************ 00:06:16.837 END TEST locking_app_on_locked_coremask 00:06:16.837 ************************************ 00:06:16.837 20:19:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:16.837 20:19:32 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:16.837 20:19:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.837 20:19:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.097 ************************************ 00:06:17.097 START TEST locking_overlapped_coremask 00:06:17.097 ************************************ 00:06:17.097 20:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:17.097 20:19:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2828617 00:06:17.097 20:19:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2828617 /var/tmp/spdk.sock 00:06:17.097 20:19:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:17.097 20:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2828617 ']' 00:06:17.097 20:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.097 20:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.097 20:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.097 20:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.097 20:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.097 [2024-05-13 20:19:32.851267] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:06:17.097 [2024-05-13 20:19:32.851317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828617 ] 00:06:17.097 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.097 [2024-05-13 20:19:32.917450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.097 [2024-05-13 20:19:32.981856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.097 [2024-05-13 20:19:32.981973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.097 [2024-05-13 20:19:32.981976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2828642 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2828642 /var/tmp/spdk2.sock 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2828642 /var/tmp/spdk2.sock 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2828642 /var/tmp/spdk2.sock 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2828642 ']' 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.666 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.926 [2024-05-13 20:19:33.650762] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:06:17.926 [2024-05-13 20:19:33.650812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828642 ] 00:06:17.926 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.926 [2024-05-13 20:19:33.732701] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2828617 has claimed it. 00:06:17.926 [2024-05-13 20:19:33.732735] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:18.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2828642) - No such process 00:06:18.496 ERROR: process (pid: 2828642) is no longer running 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2828617 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 2828617 ']' 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 2828617 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2828617 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2828617' 00:06:18.496 killing process with pid 2828617 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
2828617 00:06:18.496 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 2828617 00:06:18.757 00:06:18.757 real 0m1.722s 00:06:18.757 user 0m4.874s 00:06:18.757 sys 0m0.347s 00:06:18.757 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.757 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.757 ************************************ 00:06:18.757 END TEST locking_overlapped_coremask 00:06:18.757 ************************************ 00:06:18.757 20:19:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:18.757 20:19:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.757 20:19:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.757 20:19:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.757 ************************************ 00:06:18.757 START TEST locking_overlapped_coremask_via_rpc 00:06:18.757 ************************************ 00:06:18.757 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:18.757 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2828995 00:06:18.757 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2828995 /var/tmp/spdk.sock 00:06:18.757 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:18.757 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2828995 ']' 00:06:18.757 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.757 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.757 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.757 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.758 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.758 [2024-05-13 20:19:34.647446] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:18.758 [2024-05-13 20:19:34.647501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828995 ] 00:06:18.758 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.018 [2024-05-13 20:19:34.716028] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:19.018 [2024-05-13 20:19:34.716060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.018 [2024-05-13 20:19:34.789615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.018 [2024-05-13 20:19:34.789788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.018 [2024-05-13 20:19:34.789791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.589 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.589 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:19.589 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2829068 00:06:19.589 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2829068 /var/tmp/spdk2.sock 00:06:19.589 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2829068 ']' 00:06:19.589 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:19.589 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.589 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.589 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.589 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.589 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.589 [2024-05-13 20:19:35.477331] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:19.589 [2024-05-13 20:19:35.477382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829068 ] 00:06:19.589 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.850 [2024-05-13 20:19:35.560244] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:19.850 [2024-05-13 20:19:35.560266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.850 [2024-05-13 20:19:35.666040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.850 [2024-05-13 20:19:35.666202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.850 [2024-05-13 20:19:35.666204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.420 [2024-05-13 20:19:36.258372] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2828995 has claimed it. 
00:06:20.420 request: 00:06:20.420 { 00:06:20.420 "method": "framework_enable_cpumask_locks", 00:06:20.420 "req_id": 1 00:06:20.420 } 00:06:20.420 Got JSON-RPC error response 00:06:20.420 response: 00:06:20.420 { 00:06:20.420 "code": -32603, 00:06:20.420 "message": "Failed to claim CPU core: 2" 00:06:20.420 } 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2828995 /var/tmp/spdk.sock 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2828995 ']' 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.420 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2829068 /var/tmp/spdk2.sock 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2829068 ']' 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.680 00:06:20.680 real 0m1.998s 00:06:20.680 user 0m0.772s 00:06:20.680 sys 0m0.152s 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.680 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.680 ************************************ 00:06:20.680 END TEST locking_overlapped_coremask_via_rpc 00:06:20.680 ************************************ 00:06:20.940 20:19:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:20.940 20:19:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2828995 ]] 00:06:20.940 20:19:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2828995 00:06:20.940 20:19:36 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2828995 ']' 00:06:20.940 20:19:36 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2828995 00:06:20.941 20:19:36 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:20.941 20:19:36 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.941 20:19:36 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2828995 00:06:20.941 20:19:36 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:20.941 20:19:36 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:20.941 20:19:36 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2828995' 00:06:20.941 killing process with pid 2828995 00:06:20.941 20:19:36 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2828995 00:06:20.941 20:19:36 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2828995 00:06:21.200 20:19:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2829068 ]] 00:06:21.200 20:19:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2829068 00:06:21.200 20:19:36 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2829068 ']' 00:06:21.200 20:19:36 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2829068 00:06:21.200 20:19:36 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:21.200 20:19:36 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:21.200 20:19:36 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2829068 00:06:21.200 20:19:36 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:21.200 20:19:36 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:21.200 20:19:36 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2829068' 00:06:21.200 killing process with pid 2829068 00:06:21.200 20:19:36 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2829068 00:06:21.200 20:19:36 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2829068 00:06:21.200 20:19:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.200 20:19:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:21.200 20:19:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2828995 ]] 00:06:21.200 20:19:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2828995 00:06:21.200 20:19:37 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2828995 ']' 00:06:21.460 20:19:37 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2828995 00:06:21.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2828995) - No such process 00:06:21.460 20:19:37 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2828995 is not found' 00:06:21.460 Process with pid 2828995 is not found 00:06:21.460 20:19:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2829068 ]] 00:06:21.460 20:19:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2829068 00:06:21.460 20:19:37 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2829068 ']' 00:06:21.461 20:19:37 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2829068 00:06:21.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2829068) - No such process 00:06:21.461 20:19:37 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2829068 is not found' 00:06:21.461 Process with pid 2829068 is not found 00:06:21.461 20:19:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.461 00:06:21.461 real 0m15.657s 00:06:21.461 user 0m26.788s 00:06:21.461 sys 0m4.674s 00:06:21.461 20:19:37 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.461 20:19:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.461 ************************************ 00:06:21.461 END TEST cpu_locks 00:06:21.461 ************************************ 00:06:21.461 00:06:21.461 real 0m41.594s 00:06:21.461 user 1m21.587s 00:06:21.461 sys 0m7.829s 00:06:21.461 20:19:37 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.461 20:19:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.461 ************************************ 00:06:21.461 END TEST event 00:06:21.461 ************************************ 00:06:21.461 20:19:37 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:21.461 20:19:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.461 20:19:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.461 20:19:37 -- common/autotest_common.sh@10 -- # set +x 00:06:21.461 ************************************ 00:06:21.461 START TEST thread 00:06:21.461 ************************************ 00:06:21.461 20:19:37 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:21.461 * Looking for test storage... 00:06:21.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:21.461 20:19:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.461 20:19:37 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:21.461 20:19:37 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.461 20:19:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.461 ************************************ 00:06:21.461 START TEST thread_poller_perf 00:06:21.461 ************************************ 00:06:21.461 20:19:37 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.720 [2024-05-13 20:19:37.409747] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:21.720 [2024-05-13 20:19:37.409798] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829652 ] 00:06:21.720 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.720 [2024-05-13 20:19:37.475672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.720 [2024-05-13 20:19:37.540694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.720 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:22.661 ====================================== 00:06:22.661 busy:2409469452 (cyc) 00:06:22.661 total_run_count: 288000 00:06:22.661 tsc_hz: 2400000000 (cyc) 00:06:22.661 ====================================== 00:06:22.661 poller_cost: 8366 (cyc), 3485 (nsec) 00:06:22.661 00:06:22.661 real 0m1.201s 00:06:22.661 user 0m1.130s 00:06:22.661 sys 0m0.068s 00:06:22.661 20:19:38 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:22.661 20:19:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.661 ************************************ 00:06:22.661 END TEST thread_poller_perf 00:06:22.661 ************************************ 00:06:22.921 20:19:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.921 20:19:38 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:22.921 20:19:38 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.921 20:19:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.921 ************************************ 00:06:22.921 START TEST thread_poller_perf 00:06:22.921 ************************************ 00:06:22.921 20:19:38 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.922 [2024-05-13 20:19:38.689827] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:06:22.922 [2024-05-13 20:19:38.689873] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829819 ] 00:06:22.922 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.922 [2024-05-13 20:19:38.754282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.922 [2024-05-13 20:19:38.818619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.922 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:24.304 ====================================== 00:06:24.304 busy:2401817268 (cyc) 00:06:24.304 total_run_count: 3812000 00:06:24.304 tsc_hz: 2400000000 (cyc) 00:06:24.304 ====================================== 00:06:24.304 poller_cost: 630 (cyc), 262 (nsec) 00:06:24.304 00:06:24.304 real 0m1.192s 00:06:24.304 user 0m1.122s 00:06:24.304 sys 0m0.066s 00:06:24.304 20:19:39 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.304 20:19:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.304 ************************************ 00:06:24.304 END TEST thread_poller_perf 00:06:24.304 ************************************ 00:06:24.304 20:19:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:24.304 00:06:24.304 real 0m2.641s 00:06:24.304 user 0m2.346s 00:06:24.304 sys 0m0.294s 00:06:24.304 20:19:39 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.304 20:19:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.304 ************************************ 00:06:24.304 END TEST thread 00:06:24.304 ************************************ 00:06:24.304 20:19:39 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:24.304 20:19:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.304 20:19:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.304 20:19:39 -- common/autotest_common.sh@10 -- # set +x 00:06:24.304 ************************************ 00:06:24.304 START TEST accel 00:06:24.304 ************************************ 00:06:24.304 20:19:39 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:24.304 * Looking for test storage... 00:06:24.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:24.304 20:19:40 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:24.304 20:19:40 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:24.304 20:19:40 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:24.304 20:19:40 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2830191 00:06:24.304 20:19:40 accel -- accel/accel.sh@63 -- # waitforlisten 2830191 00:06:24.304 20:19:40 accel -- common/autotest_common.sh@827 -- # '[' -z 2830191 ']' 00:06:24.304 20:19:40 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.304 20:19:40 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:24.304 20:19:40 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
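Before the accel helpers can run, accel.sh starts a plain spdk_tgt (pid 2830191 here) with its accel JSON config supplied over /dev/fd/63, then blocks in waitforlisten until the RPC socket answers and reads the opcode-to-module table. A rough standalone equivalent of that wait-and-query step, assuming the default /var/tmp/spdk.sock socket and the rpc.py client used elsewhere in this run:

    # sketch only; rpc_get_methods is used here as a simple liveness probe
    ./build/bin/spdk_tgt &
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1                                  # poll until the target is listening
    done
    ./scripts/rpc.py -s /var/tmp/spdk.sock accel_get_opc_assignments   # prints each opcode and its module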
00:06:24.304 20:19:40 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:24.304 20:19:40 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:24.304 20:19:40 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:24.304 20:19:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.304 20:19:40 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.304 20:19:40 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.304 20:19:40 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.304 20:19:40 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.304 20:19:40 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.304 20:19:40 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:24.304 20:19:40 accel -- accel/accel.sh@41 -- # jq -r . 00:06:24.304 [2024-05-13 20:19:40.148357] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:24.304 [2024-05-13 20:19:40.148433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830191 ] 00:06:24.304 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.304 [2024-05-13 20:19:40.219403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.564 [2024-05-13 20:19:40.294005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.134 20:19:40 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.134 20:19:40 accel -- common/autotest_common.sh@860 -- # return 0 00:06:25.134 20:19:40 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:25.134 20:19:40 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:25.134 20:19:40 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:25.134 20:19:40 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:25.134 20:19:40 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:25.134 20:19:40 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:25.134 20:19:40 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:25.134 20:19:40 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.134 20:19:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.134 20:19:40 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.134 20:19:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.134 20:19:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.134 20:19:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.134 20:19:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.134 20:19:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.134 20:19:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.134 20:19:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.134 20:19:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.134 20:19:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.134 20:19:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.134 20:19:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.134 20:19:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.134 20:19:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.134 20:19:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.134 20:19:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.134 20:19:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.134 20:19:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.134 20:19:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.134 20:19:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.134 20:19:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.135 20:19:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.135 20:19:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.135 20:19:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.135 20:19:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.135 20:19:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.135 20:19:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.135 20:19:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.135 20:19:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.135 20:19:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.135 20:19:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.135 20:19:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.135 
20:19:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.135 20:19:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.135 20:19:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.135 20:19:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.135 20:19:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.135 20:19:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.135 20:19:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.135 20:19:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.135 20:19:40 accel -- accel/accel.sh@75 -- # killprocess 2830191 00:06:25.135 20:19:40 accel -- common/autotest_common.sh@946 -- # '[' -z 2830191 ']' 00:06:25.135 20:19:40 accel -- common/autotest_common.sh@950 -- # kill -0 2830191 00:06:25.135 20:19:40 accel -- common/autotest_common.sh@951 -- # uname 00:06:25.135 20:19:40 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.135 20:19:40 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2830191 00:06:25.135 20:19:41 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.135 20:19:41 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.135 20:19:41 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2830191' 00:06:25.135 killing process with pid 2830191 00:06:25.135 20:19:41 accel -- common/autotest_common.sh@965 -- # kill 2830191 00:06:25.135 20:19:41 accel -- common/autotest_common.sh@970 -- # wait 2830191 00:06:25.394 20:19:41 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:25.394 20:19:41 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:25.394 20:19:41 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:25.394 20:19:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.394 20:19:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.394 20:19:41 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:25.394 20:19:41 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:25.394 20:19:41 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:25.394 20:19:41 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.394 20:19:41 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.394 20:19:41 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.394 20:19:41 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.394 20:19:41 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.394 20:19:41 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:25.394 20:19:41 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:25.394 20:19:41 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.394 20:19:41 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:25.654 20:19:41 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:25.654 20:19:41 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:25.654 20:19:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.654 20:19:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.654 ************************************ 00:06:25.654 START TEST accel_missing_filename 00:06:25.654 ************************************ 00:06:25.654 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:25.654 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:25.654 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:25.654 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:25.654 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.654 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:25.654 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.654 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:25.654 20:19:41 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:25.654 20:19:41 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:25.654 20:19:41 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.654 20:19:41 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.654 20:19:41 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.654 20:19:41 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.654 20:19:41 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.654 20:19:41 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:25.654 20:19:41 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:25.654 [2024-05-13 20:19:41.416072] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:25.654 [2024-05-13 20:19:41.416169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830558 ] 00:06:25.654 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.654 [2024-05-13 20:19:41.484622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.654 [2024-05-13 20:19:41.549414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.654 [2024-05-13 20:19:41.581266] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.915 [2024-05-13 20:19:41.618652] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:25.915 A filename is required. 
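"A filename is required." is accel_perf refusing to start because the compress workload was given no input file; the test omits -l on purpose, and the NOT wrapper from autotest_common.sh inverts the exit status so that this failure is counted as a pass (the es=234 / es=106 / es=1 bookkeeping that follows). A rough reproduction against a local build, with the input path given only as an example:

    ./build/examples/accel_perf -t 1 -w compress                      # aborts at startup, as above
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib  # runs; path is illustrative

Passing -y together with compress is rejected as well, which is exactly what the next test, accel_compress_verify, exercises.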
00:06:25.915 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:25.915 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.915 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:25.915 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:25.915 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:25.915 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.915 00:06:25.915 real 0m0.286s 00:06:25.915 user 0m0.203s 00:06:25.915 sys 0m0.103s 00:06:25.915 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.915 20:19:41 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:25.915 ************************************ 00:06:25.915 END TEST accel_missing_filename 00:06:25.915 ************************************ 00:06:25.915 20:19:41 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:25.915 20:19:41 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:25.915 20:19:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.915 20:19:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.915 ************************************ 00:06:25.915 START TEST accel_compress_verify 00:06:25.915 ************************************ 00:06:25.915 20:19:41 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:25.915 20:19:41 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:25.915 20:19:41 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:25.915 20:19:41 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:25.915 20:19:41 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.915 20:19:41 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:25.915 20:19:41 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.915 20:19:41 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:25.915 20:19:41 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:25.915 20:19:41 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:25.915 20:19:41 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.915 20:19:41 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.915 20:19:41 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.915 20:19:41 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.915 20:19:41 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.915 
20:19:41 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:25.915 20:19:41 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:25.915 [2024-05-13 20:19:41.759968] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:25.915 [2024-05-13 20:19:41.760004] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830588 ] 00:06:25.915 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.915 [2024-05-13 20:19:41.816196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.175 [2024-05-13 20:19:41.880557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.175 [2024-05-13 20:19:41.912348] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.175 [2024-05-13 20:19:41.949297] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:26.175 00:06:26.175 Compression does not support the verify option, aborting. 00:06:26.175 20:19:42 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:26.175 20:19:42 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.175 20:19:42 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:26.175 20:19:42 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:26.175 20:19:42 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:26.175 20:19:42 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.175 00:06:26.175 real 0m0.256s 00:06:26.175 user 0m0.202s 00:06:26.175 sys 0m0.095s 00:06:26.175 20:19:42 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.175 20:19:42 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:26.175 ************************************ 00:06:26.175 END TEST accel_compress_verify 00:06:26.175 ************************************ 00:06:26.175 20:19:42 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:26.175 20:19:42 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:26.175 20:19:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.175 20:19:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.175 ************************************ 00:06:26.175 START TEST accel_wrong_workload 00:06:26.175 ************************************ 00:06:26.175 20:19:42 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:26.175 20:19:42 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:26.175 20:19:42 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:26.175 20:19:42 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:26.175 20:19:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.175 20:19:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:26.175 20:19:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.175 20:19:42 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:26.175 20:19:42 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:26.175 20:19:42 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:26.175 20:19:42 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.175 20:19:42 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.175 20:19:42 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.175 20:19:42 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.175 20:19:42 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.175 20:19:42 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:26.175 20:19:42 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:26.175 Unsupported workload type: foobar 00:06:26.175 [2024-05-13 20:19:42.079693] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:26.175 accel_perf options: 00:06:26.175 [-h help message] 00:06:26.175 [-q queue depth per core] 00:06:26.175 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:26.175 [-T number of threads per core 00:06:26.175 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:26.175 [-t time in seconds] 00:06:26.175 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:26.175 [ dif_verify, , dif_generate, dif_generate_copy 00:06:26.175 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:26.175 [-l for compress/decompress workloads, name of uncompressed input file 00:06:26.175 [-S for crc32c workload, use this seed value (default 0) 00:06:26.175 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:26.175 [-f for fill workload, use this BYTE value (default 255) 00:06:26.175 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:26.175 [-y verify result if this switch is on] 00:06:26.175 [-a tasks to allocate per core (default: same value as -q)] 00:06:26.175 Can be used to spread operations across a wider range of memory. 
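The option summary above is also the reference for every valid invocation later in this run: -w only accepts the listed workload names, so foobar is rejected during argument parsing and accel_perf exits before any work is submitted. Using only flags from that summary, a well-formed run (mirroring the accel_crc32c test below) could look like:

    # values are illustrative; flags are taken from the help text above
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # 1-second crc32c run, seed 32, verify results
    ./build/examples/accel_perf -t 1 -w xor -y -x 2       # xor needs at least two source buffers, so -x -1 is refused below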
00:06:26.175 20:19:42 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:26.175 20:19:42 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.175 20:19:42 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:26.175 20:19:42 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.175 00:06:26.175 real 0m0.020s 00:06:26.175 user 0m0.012s 00:06:26.175 sys 0m0.008s 00:06:26.175 20:19:42 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.175 20:19:42 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:26.175 ************************************ 00:06:26.175 END TEST accel_wrong_workload 00:06:26.175 ************************************ 00:06:26.175 Error: writing output failed: Broken pipe 00:06:26.435 20:19:42 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:26.435 20:19:42 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:26.435 20:19:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.435 20:19:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.435 ************************************ 00:06:26.435 START TEST accel_negative_buffers 00:06:26.435 ************************************ 00:06:26.435 20:19:42 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:26.435 20:19:42 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:26.435 20:19:42 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:26.435 20:19:42 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:26.435 20:19:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.435 20:19:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:26.435 20:19:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.435 20:19:42 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:26.435 20:19:42 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:26.435 20:19:42 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:26.435 20:19:42 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.435 20:19:42 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.435 20:19:42 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.435 20:19:42 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.435 20:19:42 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.435 20:19:42 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:26.435 20:19:42 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:26.435 -x option must be non-negative. 
00:06:26.435 [2024-05-13 20:19:42.170027] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:26.435 accel_perf options: 00:06:26.435 [-h help message] 00:06:26.435 [-q queue depth per core] 00:06:26.435 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:26.435 [-T number of threads per core 00:06:26.435 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:26.435 [-t time in seconds] 00:06:26.435 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:26.435 [ dif_verify, , dif_generate, dif_generate_copy 00:06:26.436 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:26.436 [-l for compress/decompress workloads, name of uncompressed input file 00:06:26.436 [-S for crc32c workload, use this seed value (default 0) 00:06:26.436 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:26.436 [-f for fill workload, use this BYTE value (default 255) 00:06:26.436 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:26.436 [-y verify result if this switch is on] 00:06:26.436 [-a tasks to allocate per core (default: same value as -q)] 00:06:26.436 Can be used to spread operations across a wider range of memory. 00:06:26.436 20:19:42 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:26.436 20:19:42 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.436 20:19:42 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:26.436 20:19:42 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.436 00:06:26.436 real 0m0.034s 00:06:26.436 user 0m0.019s 00:06:26.436 sys 0m0.015s 00:06:26.436 20:19:42 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.436 20:19:42 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:26.436 ************************************ 00:06:26.436 END TEST accel_negative_buffers 00:06:26.436 ************************************ 00:06:26.436 Error: writing output failed: Broken pipe 00:06:26.436 20:19:42 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:26.436 20:19:42 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:26.436 20:19:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.436 20:19:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.436 ************************************ 00:06:26.436 START TEST accel_crc32c 00:06:26.436 ************************************ 00:06:26.436 20:19:42 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:26.436 20:19:42 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:26.436 20:19:42 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:26.436 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.436 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.436 20:19:42 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:26.436 20:19:42 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:06:26.436 20:19:42 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:26.436 20:19:42 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.436 20:19:42 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.436 20:19:42 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.436 20:19:42 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.436 20:19:42 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.436 20:19:42 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:26.436 20:19:42 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:26.436 [2024-05-13 20:19:42.282289] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:26.436 [2024-05-13 20:19:42.282400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830757 ] 00:06:26.436 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.436 [2024-05-13 20:19:42.353456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.696 [2024-05-13 20:19:42.427768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.696 20:19:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.634 20:19:43 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:27.634 20:19:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.634 00:06:27.634 real 0m1.304s 00:06:27.634 user 0m1.205s 00:06:27.634 sys 0m0.111s 00:06:27.634 20:19:43 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.634 20:19:43 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:27.634 ************************************ 00:06:27.634 END TEST accel_crc32c 00:06:27.634 ************************************ 00:06:27.895 20:19:43 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:27.895 20:19:43 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:27.895 20:19:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.895 20:19:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.895 ************************************ 00:06:27.895 START TEST accel_crc32c_C2 00:06:27.895 ************************************ 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:27.895 [2024-05-13 20:19:43.657117] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:27.895 [2024-05-13 20:19:43.657175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831002 ] 00:06:27.895 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.895 [2024-05-13 20:19:43.723727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.895 [2024-05-13 20:19:43.789477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.895 20:19:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.280 20:19:44 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.280 00:06:29.280 real 0m1.288s 00:06:29.280 user 0m1.193s 00:06:29.280 sys 0m0.105s 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.280 20:19:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:29.280 ************************************ 00:06:29.280 END TEST accel_crc32c_C2 00:06:29.280 ************************************ 00:06:29.280 20:19:44 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:29.280 20:19:44 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:29.280 20:19:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.280 20:19:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.280 ************************************ 00:06:29.280 START TEST accel_copy 00:06:29.280 ************************************ 00:06:29.280 20:19:44 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:29.280 20:19:44 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:29.280 20:19:44 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:29.280 20:19:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.280 20:19:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.280 20:19:44 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:29.280 20:19:44 
accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:29.280 20:19:44 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:29.280 20:19:44 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.280 20:19:44 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.280 20:19:44 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.280 20:19:44 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.281 20:19:44 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.281 20:19:44 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:29.281 20:19:44 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:29.281 [2024-05-13 20:19:45.024612] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:29.281 [2024-05-13 20:19:45.024702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831354 ] 00:06:29.281 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.281 [2024-05-13 20:19:45.091966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.281 [2024-05-13 20:19:45.157547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.281 20:19:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
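Note: the repeated IFS=: / read -r var val / case "$var" entries above are accel.sh walking a colon-separated list of test settings (workload, buffer size, queue depth, run time, module) and recording the two fields it asserts on later, accel_module and accel_opc. A minimal sketch of that parsing style is below; the input field names are hypothetical illustrations, not the actual accel.sh loop.

# Hypothetical sketch of the key:value parsing style visible in the trace:
# each line such as "module:software" or "queue depth:32" is split on ":"
# into $var and $val, and only the fields the test cares about are kept.
parse_accel_config() {
    local var val
    while IFS=: read -r var val; do
        case "$var" in
            module)   accel_module=$val ;;   # later checked with [[ -n ... ]]
            workload) accel_opc=$val ;;      # e.g. copy, fill, crc32c
            *)        : ;;                   # everything else is ignored
        esac
    done
}

Fed a line like "module:software", this sketch would leave accel_module=software, which matches the accel_module=software assignment seen at accel/accel.sh@22 in the trace.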
00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:30.665 20:19:46 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.665 00:06:30.665 real 0m1.291s 00:06:30.665 user 0m1.189s 00:06:30.665 sys 0m0.112s 00:06:30.665 20:19:46 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.665 20:19:46 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:30.665 ************************************ 00:06:30.665 END TEST accel_copy 00:06:30.665 ************************************ 00:06:30.665 20:19:46 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:30.665 20:19:46 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:30.665 20:19:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.665 20:19:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.665 ************************************ 00:06:30.665 START TEST accel_fill 00:06:30.665 ************************************ 00:06:30.665 20:19:46 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.665 20:19:46 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:30.665 [2024-05-13 20:19:46.387786] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:30.665 [2024-05-13 20:19:46.387846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831703 ] 00:06:30.665 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.665 [2024-05-13 20:19:46.455324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.665 [2024-05-13 20:19:46.524781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.665 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.666 20:19:46 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.666 20:19:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:32.049 20:19:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.049 00:06:32.049 real 0m1.295s 00:06:32.049 user 0m1.198s 00:06:32.049 sys 0m0.107s 00:06:32.049 20:19:47 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.049 20:19:47 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:32.049 ************************************ 00:06:32.049 END TEST accel_fill 00:06:32.049 ************************************ 00:06:32.049 20:19:47 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:32.049 20:19:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:32.049 20:19:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.049 20:19:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.049 ************************************ 00:06:32.049 START TEST accel_copy_crc32c 00:06:32.049 ************************************ 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
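Note: every case in this log drives the same example binary; the traced command line above is build/examples/accel_perf with a JSON config passed on /dev/fd/62 plus the workload flags. To reproduce one copy_crc32c run by hand against this job's checkout, a minimal invocation along the lines of the sketch below should suffice. The path is the workspace path from this log, the flags are copied from the trace (-t run time in seconds, -w workload, -y verify the result), and the -c option is dropped here because build_accel_config emitted no JSON in this run.

# Stand-alone rerun sketch of the copy_crc32c case (assumes the SPDK build
# from this workspace; adjust the path for a local checkout).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w copy_crc32c -y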
00:06:32.049 [2024-05-13 20:19:47.744779] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:32.049 [2024-05-13 20:19:47.744825] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832058 ] 00:06:32.049 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.049 [2024-05-13 20:19:47.809208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.049 [2024-05-13 20:19:47.873541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.049 20:19:47 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.049 20:19:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.433 20:19:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.433 20:19:48 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:33.433 20:19:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.433 20:19:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.433 20:19:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.433 20:19:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.433 20:19:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.433 20:19:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.433 20:19:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.433 00:06:33.433 real 0m1.271s 00:06:33.433 user 0m1.198s 00:06:33.433 sys 0m0.086s 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.433 20:19:49 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:33.433 ************************************ 00:06:33.433 END TEST accel_copy_crc32c 00:06:33.433 ************************************ 00:06:33.433 20:19:49 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:33.433 20:19:49 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:33.433 20:19:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.433 20:19:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.433 ************************************ 00:06:33.433 START TEST accel_copy_crc32c_C2 00:06:33.433 ************************************ 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:33.433 [2024-05-13 20:19:49.089065] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:33.433 [2024-05-13 20:19:49.089108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832248 ] 00:06:33.433 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.433 [2024-05-13 20:19:49.153372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.433 [2024-05-13 20:19:49.218417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.433 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.434 20:19:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.817 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.817 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.817 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.817 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.817 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.817 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.818 00:06:34.818 real 0m1.273s 00:06:34.818 user 0m1.195s 00:06:34.818 sys 0m0.091s 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.818 20:19:50 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:34.818 
************************************ 00:06:34.818 END TEST accel_copy_crc32c_C2 00:06:34.818 ************************************ 00:06:34.818 20:19:50 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:34.818 20:19:50 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:34.818 20:19:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.818 20:19:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.818 ************************************ 00:06:34.818 START TEST accel_dualcast 00:06:34.818 ************************************ 00:06:34.818 20:19:50 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:34.818 [2024-05-13 20:19:50.461006] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
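Note: the START TEST / END TEST banners and the real/user/sys timing lines that frame every case come from the run_test helper traced as common/autotest_common.sh@1097/@1103/@1121 above. The real helper does more bookkeeping; the stand-in below is only a rough, hypothetical approximation of the shape observable in this log.

# Rough, hypothetical stand-in for run_test as it appears in this log: print
# a banner, time the wrapped command, print a closing banner, keep the exit
# status. Not the actual autotest_common.sh code.
run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}

run_test_sketch accel_dualcast accel_test -t 1 -w dualcast -y   # mirrors the call traced above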
00:06:34.818 [2024-05-13 20:19:50.461075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832450 ] 00:06:34.818 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.818 [2024-05-13 20:19:50.531631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.818 [2024-05-13 20:19:50.598556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 
20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.818 20:19:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:51 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:36.204 20:19:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.204 00:06:36.204 real 0m1.296s 00:06:36.204 user 0m1.192s 00:06:36.204 sys 0m0.114s 00:06:36.204 20:19:51 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.204 20:19:51 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:36.204 ************************************ 00:06:36.204 END TEST accel_dualcast 00:06:36.204 ************************************ 00:06:36.204 20:19:51 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:36.204 20:19:51 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:36.204 20:19:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.204 20:19:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.204 ************************************ 00:06:36.204 START TEST accel_compare 00:06:36.204 ************************************ 00:06:36.204 20:19:51 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:36.204 [2024-05-13 20:19:51.829992] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
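Note: just before each END TEST banner the trace shows three [[ ]] checks at accel/accel.sh@27: the parsed module is non-empty, the parsed opcode is non-empty, and the module equals the expected one (software in every case here, hence the escaped \s\o\f\t\w\a\r\e pattern xtrace prints for the == match). A hedged sketch of that final gate, reusing the accel_module/accel_opc names from the trace:

# Sketch of the accel.sh@27-style result gate seen near the end of every test
# above; returns non-zero (failing the test) if either field was never parsed
# or the operation was serviced by an unexpected module.
check_accel_result() {
    local expected_module=$1            # "software" throughout this log
    [[ -n "$accel_module" ]] || return 1
    [[ -n "$accel_opc" ]]    || return 1
    [[ "$accel_module" == "$expected_module" ]]
}

check_accel_result software    # passes when the software module handled the op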
00:06:36.204 [2024-05-13 20:19:51.830083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832797 ] 00:06:36.204 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.204 [2024-05-13 20:19:51.897096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.204 [2024-05-13 20:19:51.961986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:52 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.204 20:19:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.146 20:19:53 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.146 20:19:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.407 20:19:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.407 20:19:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.407 20:19:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.407 20:19:53 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.407 20:19:53 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:37.407 20:19:53 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.407 00:06:37.407 real 0m1.290s 00:06:37.407 user 0m1.192s 00:06:37.407 sys 0m0.109s 00:06:37.407 20:19:53 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.407 20:19:53 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:37.407 ************************************ 00:06:37.407 END TEST accel_compare 00:06:37.407 ************************************ 00:06:37.407 20:19:53 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:37.407 20:19:53 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:37.407 20:19:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.407 20:19:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.407 ************************************ 00:06:37.407 START TEST accel_xor 00:06:37.407 ************************************ 00:06:37.407 20:19:53 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:37.407 20:19:53 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:37.407 20:19:53 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:37.407 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.407 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.407 20:19:53 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:37.407 20:19:53 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:37.407 20:19:53 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:37.407 20:19:53 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.407 20:19:53 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.407 20:19:53 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.407 20:19:53 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.407 20:19:53 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.407 20:19:53 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:37.407 20:19:53 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:37.407 [2024-05-13 20:19:53.201887] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
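This section drives accel_perf with the xor workload in the same way; the val=2 entry in the trace presumably reflects the default of two XOR source buffers. A comparable direct invocation, under the same assumptions as the compare sketch above:

  # Sketch only: 1-second software xor run with result verification.
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y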
00:06:37.407 [2024-05-13 20:19:53.201986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833145 ] 00:06:37.407 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.407 [2024-05-13 20:19:53.274253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.407 [2024-05-13 20:19:53.343887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.667 20:19:53 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.668 20:19:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.609 
20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.609 00:06:38.609 real 0m1.304s 00:06:38.609 user 0m1.204s 00:06:38.609 sys 0m0.111s 00:06:38.609 20:19:54 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.609 20:19:54 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:38.609 ************************************ 00:06:38.609 END TEST accel_xor 00:06:38.609 ************************************ 00:06:38.609 20:19:54 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:38.609 20:19:54 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:38.609 20:19:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.609 20:19:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.609 ************************************ 00:06:38.609 START TEST accel_xor 00:06:38.609 ************************************ 00:06:38.609 20:19:54 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.609 20:19:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:38.871 [2024-05-13 20:19:54.561253] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
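The second xor test adds -x 3 to the recorded command line, and the val=3 entry in the trace that follows matches it (presumably three XOR source buffers instead of the default two). A direct equivalent, same assumptions as the earlier sketches:

  # Sketch only: xor across three source buffers (assumed meaning of -x).
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 3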
00:06:38.871 [2024-05-13 20:19:54.561300] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833494 ] 00:06:38.871 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.871 [2024-05-13 20:19:54.624507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.871 [2024-05-13 20:19:54.689000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.871 20:19:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.257 
20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:40.257 20:19:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.257 00:06:40.257 real 0m1.270s 00:06:40.257 user 0m1.188s 00:06:40.257 sys 0m0.094s 00:06:40.257 20:19:55 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.257 20:19:55 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:40.257 ************************************ 00:06:40.257 END TEST accel_xor 00:06:40.257 ************************************ 00:06:40.257 20:19:55 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:40.257 20:19:55 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:40.257 20:19:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.257 20:19:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.257 ************************************ 00:06:40.257 START TEST accel_dif_verify 00:06:40.257 ************************************ 00:06:40.257 20:19:55 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:40.257 20:19:55 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:40.258 20:19:55 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:40.258 20:19:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:55 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:40.258 20:19:55 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:40.258 20:19:55 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.258 20:19:55 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.258 20:19:55 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:40.258 20:19:55 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.258 20:19:55 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.258 20:19:55 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.258 20:19:55 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:40.258 20:19:55 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:40.258 [2024-05-13 20:19:55.922086] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
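Here the workload switches to dif_verify; the 4096-byte, 512-byte and 8-byte values parsed in the trace below look like the transfer size, DIF block size and metadata size reported by accel_perf (an interpretation, not stated explicitly in the log). A direct equivalent under the same assumptions:

  # Sketch only: 1-second DIF verification run.
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_verify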
00:06:40.258 [2024-05-13 20:19:55.922177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833763 ] 00:06:40.258 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.258 [2024-05-13 20:19:55.990310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.258 [2024-05-13 20:19:56.057657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 
20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.258 20:19:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.302 
20:19:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:41.302 20:19:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.302 00:06:41.302 real 0m1.294s 00:06:41.302 user 0m1.201s 00:06:41.302 sys 0m0.105s 00:06:41.302 20:19:57 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.302 20:19:57 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:41.302 ************************************ 00:06:41.302 END TEST accel_dif_verify 00:06:41.302 ************************************ 00:06:41.563 20:19:57 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:41.563 20:19:57 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:41.563 20:19:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.563 20:19:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.563 ************************************ 00:06:41.563 START TEST accel_dif_generate 00:06:41.563 ************************************ 00:06:41.563 20:19:57 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 
20:19:57 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:41.563 [2024-05-13 20:19:57.295597] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:41.563 [2024-05-13 20:19:57.295689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833953 ] 00:06:41.563 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.563 [2024-05-13 20:19:57.364679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.563 [2024-05-13 20:19:57.434442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:41.563 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
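The dif_generate run traced here follows the same pattern as dif_verify, generating DIF metadata rather than checking it. A direct equivalent, under the same assumptions as the earlier sketches:

  # Sketch only: 1-second dif_generate run.
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate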
00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.564 20:19:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:42.947 20:19:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.947 00:06:42.947 real 0m1.297s 00:06:42.947 user 0m1.205s 00:06:42.947 sys 
0m0.105s 00:06:42.947 20:19:58 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.947 20:19:58 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:42.947 ************************************ 00:06:42.947 END TEST accel_dif_generate 00:06:42.947 ************************************ 00:06:42.947 20:19:58 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:42.947 20:19:58 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:42.947 20:19:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.947 20:19:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.947 ************************************ 00:06:42.947 START TEST accel_dif_generate_copy 00:06:42.947 ************************************ 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:42.947 [2024-05-13 20:19:58.672748] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
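dif_generate_copy, launched above, effectively combines DIF generation with a copy to a separate destination buffer; the recorded command line differs from the previous test only in the -w value. A direct equivalent, same assumptions:

  # Sketch only: 1-second dif_generate_copy run.
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate_copy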
00:06:42.947 [2024-05-13 20:19:58.672848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834236 ] 00:06:42.947 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.947 [2024-05-13 20:19:58.746262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.947 [2024-05-13 20:19:58.811122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.947 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.948 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.948 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.948 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.948 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.948 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.948 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.948 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.948 20:19:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.332 00:06:44.332 real 0m1.297s 00:06:44.332 user 0m1.208s 00:06:44.332 sys 0m0.100s 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.332 20:19:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:44.332 ************************************ 00:06:44.332 END TEST accel_dif_generate_copy 00:06:44.332 ************************************ 00:06:44.332 20:19:59 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:44.332 20:19:59 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.332 20:19:59 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:44.332 20:19:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.332 20:19:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.332 ************************************ 00:06:44.332 START TEST accel_comp 00:06:44.332 ************************************ 00:06:44.332 20:20:00 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:44.332 [2024-05-13 20:20:00.050865] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:44.332 [2024-05-13 20:20:00.050980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834591 ] 00:06:44.332 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.332 [2024-05-13 20:20:00.131388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.332 [2024-05-13 20:20:00.200606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 
20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:44.332 20:20:00 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.332 20:20:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:44.333 20:20:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.333 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.333 20:20:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:45.715 20:20:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.715 00:06:45.715 real 0m1.313s 00:06:45.715 user 0m1.205s 00:06:45.715 sys 0m0.119s 00:06:45.715 20:20:01 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.715 20:20:01 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:45.715 ************************************ 00:06:45.715 END TEST accel_comp 00:06:45.715 ************************************ 00:06:45.715 20:20:01 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:45.715 20:20:01 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:45.715 20:20:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.715 20:20:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.715 ************************************ 00:06:45.715 START TEST accel_decomp 00:06:45.715 ************************************ 00:06:45.715 20:20:01 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:45.715 [2024-05-13 20:20:01.438711] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:45.715 [2024-05-13 20:20:01.438774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834941 ] 00:06:45.715 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.715 [2024-05-13 20:20:01.506582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.715 [2024-05-13 20:20:01.573657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.715 20:20:01 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.715 20:20:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.098 20:20:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.099 20:20:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.099 20:20:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.099 20:20:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:47.099 20:20:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.099 00:06:47.099 real 0m1.295s 00:06:47.099 user 0m1.213s 00:06:47.099 sys 0m0.095s 00:06:47.099 20:20:02 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.099 20:20:02 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:47.099 ************************************ 00:06:47.099 END TEST accel_decomp 00:06:47.099 ************************************ 00:06:47.099 
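Each of the val=.../case "$var" runs traced above is the same option-echo loop: accel.sh reads back the parameters of the run -- the opcode, the module, the '4096 bytes' buffer value, the '1 seconds' run time and so on -- as key/value pairs split on ":", then asserts at accel.sh@27 that an opcode and a module were captured and that the software path handled the operation. A compact sketch of that pattern, under the assumption that the pairs arrive as simple "key:value" lines (the exact stream wiring inside accel.sh is not visible in this excerpt):

# Sketch of the IFS=: / case parsing pattern seen in the trace (assumed input format).
accel_opc='' accel_module=''
while IFS=: read -r var val; do
    case "$var" in
        opc)    accel_opc=$val ;;     # e.g. decompress
        module) accel_module=$val ;;  # e.g. software
    esac
done <<'EOF'
opc:decompress
module:software
EOF
[[ -n $accel_opc ]] && [[ -n $accel_module ]] && [[ $accel_module == software ]] \
    && echo "decompress handled by the software module"
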
20:20:02 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:47.099 20:20:02 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:47.099 20:20:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.099 20:20:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.099 ************************************ 00:06:47.099 START TEST accel_decmop_full 00:06:47.099 ************************************ 00:06:47.099 20:20:02 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:47.099 [2024-05-13 20:20:02.812513] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
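accel_decmop_full repeats the decompress workload with an extra -o 0 flag; judging by the '111250 bytes' value in its trace (where the fixed-block runs show '4096 bytes'), this variant drives one large buffer rather than 4 KiB blocks. The harness also feeds a JSON configuration to accel_perf on fd 62 (-c /dev/fd/62); that configuration is not reproduced in this log. A hand-run equivalent, reusing the binary, flags, and input path exactly as they appear in the trace and simply omitting the unshown config:

# Re-run of the full-buffer decompress case outside the harness.
# Assumption: accel_perf runs without -c when no extra accel modules need configuring;
# the JSON piped to /dev/fd/62 by accel.sh is not visible in this excerpt.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0
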
00:06:47.099 [2024-05-13 20:20:02.812581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835286 ] 00:06:47.099 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.099 [2024-05-13 20:20:02.881224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.099 [2024-05-13 20:20:02.951200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:47.099 20:20:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.099 20:20:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.099 20:20:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:48.487 20:20:04 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.487 00:06:48.487 real 0m1.314s 00:06:48.487 user 0m1.216s 00:06:48.487 sys 0m0.112s 00:06:48.487 20:20:04 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.487 20:20:04 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:48.487 ************************************ 00:06:48.487 END TEST accel_decmop_full 00:06:48.487 ************************************ 00:06:48.487 20:20:04 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:48.487 20:20:04 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:48.487 20:20:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.487 20:20:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.487 ************************************ 00:06:48.487 START TEST accel_decomp_mcore 00:06:48.487 ************************************ 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:48.487 [2024-05-13 20:20:04.204179] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:48.487 [2024-05-13 20:20:04.204262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835497 ] 00:06:48.487 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.487 [2024-05-13 20:20:04.275582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.487 [2024-05-13 20:20:04.351311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.487 [2024-05-13 20:20:04.351451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.487 [2024-05-13 20:20:04.351454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.487 [2024-05-13 20:20:04.351429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.487 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.488 20:20:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.869 00:06:49.869 real 0m1.314s 00:06:49.869 user 0m4.447s 00:06:49.869 sys 0m0.116s 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.869 20:20:05 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:49.869 ************************************ 00:06:49.869 END TEST accel_decomp_mcore 00:06:49.869 ************************************ 00:06:49.869 20:20:05 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.869 20:20:05 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:49.869 20:20:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.869 20:20:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.869 ************************************ 00:06:49.869 START TEST accel_decomp_full_mcore 00:06:49.869 ************************************ 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:49.869 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:49.869 [2024-05-13 20:20:05.599703] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:49.869 [2024-05-13 20:20:05.599771] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835703 ] 00:06:49.869 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.869 [2024-05-13 20:20:05.668924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.870 [2024-05-13 20:20:05.739411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.870 [2024-05-13 20:20:05.739627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.870 [2024-05-13 20:20:05.739748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.870 [2024-05-13 20:20:05.739752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.870 20:20:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.252 20:20:06 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.252 00:06:51.252 real 0m1.320s 00:06:51.252 user 0m4.484s 00:06:51.252 sys 0m0.120s 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.252 20:20:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:51.252 ************************************ 00:06:51.252 END TEST accel_decomp_full_mcore 00:06:51.252 ************************************ 00:06:51.252 20:20:06 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:51.252 20:20:06 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:51.252 20:20:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.252 20:20:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.252 ************************************ 00:06:51.252 START TEST accel_decomp_mthread 00:06:51.252 ************************************ 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:51.252 20:20:06 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:06:51.252 [2024-05-13 20:20:06.999807] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:51.252 [2024-05-13 20:20:06.999926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836041 ] 00:06:51.252 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.252 [2024-05-13 20:20:07.077389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.252 [2024-05-13 20:20:07.142942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.252 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.253 20:20:07 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.253 20:20:07 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.636 00:06:52.636 real 0m1.309s 00:06:52.636 user 0m1.204s 00:06:52.636 sys 0m0.116s 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.636 20:20:08 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:52.636 ************************************ 00:06:52.636 END TEST accel_decomp_mthread 00:06:52.636 ************************************ 00:06:52.636 20:20:08 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:52.636 20:20:08 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:52.636 20:20:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.637 20:20:08 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.637 ************************************ 00:06:52.637 START TEST accel_decomp_full_mthread 00:06:52.637 ************************************ 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:52.637 [2024-05-13 20:20:08.385087] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:06:52.637 [2024-05-13 20:20:08.385150] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836388 ] 00:06:52.637 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.637 [2024-05-13 20:20:08.452847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.637 [2024-05-13 20:20:08.518751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.637 20:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.020 00:06:54.020 real 0m1.324s 00:06:54.020 user 0m1.232s 00:06:54.020 sys 0m0.104s 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.020 20:20:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:54.020 ************************************ 00:06:54.020 END TEST accel_decomp_full_mthread 00:06:54.020 
************************************ 00:06:54.020 20:20:09 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:54.020 20:20:09 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:54.020 20:20:09 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:54.020 20:20:09 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:54.020 20:20:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.020 20:20:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.020 20:20:09 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.020 20:20:09 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.020 20:20:09 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.020 20:20:09 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.020 20:20:09 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.020 20:20:09 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:54.020 20:20:09 accel -- accel/accel.sh@41 -- # jq -r . 00:06:54.020 ************************************ 00:06:54.020 START TEST accel_dif_functional_tests 00:06:54.020 ************************************ 00:06:54.020 20:20:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:54.020 [2024-05-13 20:20:09.812488] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:54.020 [2024-05-13 20:20:09.812538] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836738 ] 00:06:54.020 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.020 [2024-05-13 20:20:09.879484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.020 [2024-05-13 20:20:09.950991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.020 [2024-05-13 20:20:09.951110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.020 [2024-05-13 20:20:09.951113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.280 00:06:54.280 00:06:54.280 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.280 http://cunit.sourceforge.net/ 00:06:54.280 00:06:54.280 00:06:54.280 Suite: accel_dif 00:06:54.280 Test: verify: DIF generated, GUARD check ...passed 00:06:54.280 Test: verify: DIF generated, APPTAG check ...passed 00:06:54.280 Test: verify: DIF generated, REFTAG check ...passed 00:06:54.280 Test: verify: DIF not generated, GUARD check ...[2024-05-13 20:20:10.007507] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:54.280 [2024-05-13 20:20:10.007548] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:54.280 passed 00:06:54.280 Test: verify: DIF not generated, APPTAG check ...[2024-05-13 20:20:10.007583] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:54.280 [2024-05-13 20:20:10.007598] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:54.280 passed 00:06:54.280 Test: verify: DIF not generated, REFTAG check ...[2024-05-13 20:20:10.007615] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:54.280 [2024-05-13 
20:20:10.007630] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:54.280 passed 00:06:54.280 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:54.280 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-13 20:20:10.007674] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:54.280 passed 00:06:54.280 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:54.281 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:54.281 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:54.281 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-13 20:20:10.007790] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:54.281 passed 00:06:54.281 Test: generate copy: DIF generated, GUARD check ...passed 00:06:54.281 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:54.281 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:54.281 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:54.281 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:54.281 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:54.281 Test: generate copy: iovecs-len validate ...[2024-05-13 20:20:10.007979] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:54.281 passed 00:06:54.281 Test: generate copy: buffer alignment validate ...passed 00:06:54.281 00:06:54.281 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.281 suites 1 1 n/a 0 0 00:06:54.281 tests 20 20 20 0 0 00:06:54.281 asserts 204 204 204 0 n/a 00:06:54.281 00:06:54.281 Elapsed time = 0.000 seconds 00:06:54.281 00:06:54.281 real 0m0.360s 00:06:54.281 user 0m0.463s 00:06:54.281 sys 0m0.124s 00:06:54.281 20:20:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.281 20:20:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:54.281 ************************************ 00:06:54.281 END TEST accel_dif_functional_tests 00:06:54.281 ************************************ 00:06:54.281 00:06:54.281 real 0m30.178s 00:06:54.281 user 0m33.685s 00:06:54.281 sys 0m4.131s 00:06:54.281 20:20:10 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.281 20:20:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.281 ************************************ 00:06:54.281 END TEST accel 00:06:54.281 ************************************ 00:06:54.281 20:20:10 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:54.281 20:20:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:54.281 20:20:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.281 20:20:10 -- common/autotest_common.sh@10 -- # set +x 00:06:54.541 ************************************ 00:06:54.541 START TEST accel_rpc 00:06:54.541 ************************************ 00:06:54.541 20:20:10 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:54.541 * Looking for test storage... 
00:06:54.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:54.541 20:20:10 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:54.541 20:20:10 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2836808 00:06:54.541 20:20:10 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2836808 00:06:54.541 20:20:10 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:54.541 20:20:10 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 2836808 ']' 00:06:54.541 20:20:10 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.541 20:20:10 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:54.541 20:20:10 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.541 20:20:10 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:54.541 20:20:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.541 [2024-05-13 20:20:10.402030] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:54.541 [2024-05-13 20:20:10.402086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836808 ] 00:06:54.541 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.541 [2024-05-13 20:20:10.474285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.801 [2024-05-13 20:20:10.547962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.370 20:20:11 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:55.370 20:20:11 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:55.370 20:20:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:55.370 20:20:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:55.370 20:20:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:55.370 20:20:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:55.370 20:20:11 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:55.370 20:20:11 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:55.370 20:20:11 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.370 20:20:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.370 ************************************ 00:06:55.370 START TEST accel_assign_opcode 00:06:55.370 ************************************ 00:06:55.370 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:06:55.370 20:20:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:55.370 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.370 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:55.370 [2024-05-13 20:20:11.201875] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:55.370 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:06:55.370 20:20:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:55.370 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.370 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:55.370 [2024-05-13 20:20:11.209889] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:55.370 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.370 20:20:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:55.370 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.370 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:55.630 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.630 20:20:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:55.630 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.630 20:20:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:55.630 20:20:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:55.630 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:55.630 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.630 software 00:06:55.630 00:06:55.630 real 0m0.207s 00:06:55.630 user 0m0.048s 00:06:55.630 sys 0m0.013s 00:06:55.630 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.630 20:20:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:55.630 ************************************ 00:06:55.630 END TEST accel_assign_opcode 00:06:55.630 ************************************ 00:06:55.630 20:20:11 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2836808 00:06:55.630 20:20:11 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 2836808 ']' 00:06:55.630 20:20:11 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 2836808 00:06:55.630 20:20:11 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:06:55.630 20:20:11 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:55.630 20:20:11 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2836808 00:06:55.630 20:20:11 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:55.630 20:20:11 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:55.630 20:20:11 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2836808' 00:06:55.630 killing process with pid 2836808 00:06:55.630 20:20:11 accel_rpc -- common/autotest_common.sh@965 -- # kill 2836808 00:06:55.630 20:20:11 accel_rpc -- common/autotest_common.sh@970 -- # wait 2836808 00:06:55.890 00:06:55.890 real 0m1.452s 00:06:55.890 user 0m1.536s 00:06:55.890 sys 0m0.394s 00:06:55.890 20:20:11 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.890 20:20:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.890 ************************************ 00:06:55.890 END TEST accel_rpc 00:06:55.890 ************************************ 00:06:55.890 20:20:11 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:55.890 20:20:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:55.890 20:20:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.890 20:20:11 -- common/autotest_common.sh@10 -- # set +x 00:06:55.890 ************************************ 00:06:55.890 START TEST app_cmdline 00:06:55.890 ************************************ 00:06:55.890 20:20:11 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:56.150 * Looking for test storage... 00:06:56.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:56.150 20:20:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:56.150 20:20:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2837214 00:06:56.150 20:20:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2837214 00:06:56.150 20:20:11 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:56.150 20:20:11 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 2837214 ']' 00:06:56.150 20:20:11 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.150 20:20:11 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.150 20:20:11 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.150 20:20:11 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.150 20:20:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.150 [2024-05-13 20:20:11.934896] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:06:56.150 [2024-05-13 20:20:11.934946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837214 ] 00:06:56.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.150 [2024-05-13 20:20:12.003589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.150 [2024-05-13 20:20:12.072654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:57.091 20:20:12 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:57.091 { 00:06:57.091 "version": "SPDK v24.05-pre git sha1 b084cba07", 00:06:57.091 "fields": { 00:06:57.091 "major": 24, 00:06:57.091 "minor": 5, 00:06:57.091 "patch": 0, 00:06:57.091 "suffix": "-pre", 00:06:57.091 "commit": "b084cba07" 00:06:57.091 } 00:06:57.091 } 00:06:57.091 20:20:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:57.091 20:20:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:57.091 20:20:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:57.091 20:20:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:57.091 20:20:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:57.091 20:20:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.091 20:20:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.091 20:20:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:57.091 20:20:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:57.091 20:20:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.091 20:20:12 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:57.091 20:20:12 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.351 request: 00:06:57.351 { 00:06:57.351 "method": "env_dpdk_get_mem_stats", 00:06:57.351 "req_id": 1 00:06:57.351 } 00:06:57.351 Got JSON-RPC error response 00:06:57.351 response: 00:06:57.351 { 00:06:57.351 "code": -32601, 00:06:57.351 "message": "Method not found" 00:06:57.351 } 00:06:57.351 20:20:13 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:57.351 20:20:13 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.351 20:20:13 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.351 20:20:13 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.351 20:20:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2837214 00:06:57.351 20:20:13 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 2837214 ']' 00:06:57.351 20:20:13 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 2837214 00:06:57.351 20:20:13 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:57.351 20:20:13 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:57.351 20:20:13 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2837214 00:06:57.352 20:20:13 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:57.352 20:20:13 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:57.352 20:20:13 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2837214' 00:06:57.352 killing process with pid 2837214 00:06:57.352 20:20:13 app_cmdline -- common/autotest_common.sh@965 -- # kill 2837214 00:06:57.352 20:20:13 app_cmdline -- common/autotest_common.sh@970 -- # wait 2837214 00:06:57.612 00:06:57.612 real 0m1.531s 00:06:57.612 user 0m1.846s 00:06:57.612 sys 0m0.389s 00:06:57.612 20:20:13 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.612 20:20:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.612 ************************************ 00:06:57.612 END TEST app_cmdline 00:06:57.612 ************************************ 00:06:57.612 20:20:13 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:57.612 20:20:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:57.612 20:20:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.612 20:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.612 ************************************ 00:06:57.612 START TEST version 00:06:57.612 ************************************ 00:06:57.612 20:20:13 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:57.612 * Looking for test storage... 
00:06:57.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:57.612 20:20:13 version -- app/version.sh@17 -- # get_header_version major 00:06:57.612 20:20:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:57.612 20:20:13 version -- app/version.sh@14 -- # cut -f2 00:06:57.612 20:20:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.612 20:20:13 version -- app/version.sh@17 -- # major=24 00:06:57.612 20:20:13 version -- app/version.sh@18 -- # get_header_version minor 00:06:57.612 20:20:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:57.612 20:20:13 version -- app/version.sh@14 -- # cut -f2 00:06:57.612 20:20:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.612 20:20:13 version -- app/version.sh@18 -- # minor=5 00:06:57.612 20:20:13 version -- app/version.sh@19 -- # get_header_version patch 00:06:57.612 20:20:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:57.612 20:20:13 version -- app/version.sh@14 -- # cut -f2 00:06:57.612 20:20:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.612 20:20:13 version -- app/version.sh@19 -- # patch=0 00:06:57.612 20:20:13 version -- app/version.sh@20 -- # get_header_version suffix 00:06:57.612 20:20:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:57.612 20:20:13 version -- app/version.sh@14 -- # cut -f2 00:06:57.612 20:20:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.612 20:20:13 version -- app/version.sh@20 -- # suffix=-pre 00:06:57.612 20:20:13 version -- app/version.sh@22 -- # version=24.5 00:06:57.612 20:20:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:57.612 20:20:13 version -- app/version.sh@28 -- # version=24.5rc0 00:06:57.612 20:20:13 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:57.612 20:20:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:57.873 20:20:13 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:57.873 20:20:13 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:57.873 00:06:57.873 real 0m0.166s 00:06:57.873 user 0m0.081s 00:06:57.873 sys 0m0.121s 00:06:57.873 20:20:13 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.873 20:20:13 version -- common/autotest_common.sh@10 -- # set +x 00:06:57.873 ************************************ 00:06:57.873 END TEST version 00:06:57.873 ************************************ 00:06:57.873 20:20:13 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:57.873 20:20:13 -- spdk/autotest.sh@194 -- # uname -s 00:06:57.873 20:20:13 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:57.873 20:20:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:57.873 20:20:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:57.873 20:20:13 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
00:06:57.873 20:20:13 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:57.873 20:20:13 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:57.873 20:20:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:57.873 20:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.873 20:20:13 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:57.873 20:20:13 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:57.873 20:20:13 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:57.873 20:20:13 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:57.873 20:20:13 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:57.873 20:20:13 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:57.873 20:20:13 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:57.873 20:20:13 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:57.873 20:20:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.873 20:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.873 ************************************ 00:06:57.873 START TEST nvmf_tcp 00:06:57.873 ************************************ 00:06:57.873 20:20:13 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:57.874 * Looking for test storage... 00:06:57.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.874 20:20:13 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.135 20:20:13 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.135 20:20:13 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.135 20:20:13 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.135 20:20:13 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.135 20:20:13 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.135 20:20:13 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.135 20:20:13 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.135 20:20:13 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:58.135 20:20:13 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.135 20:20:13 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:58.135 20:20:13 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.135 20:20:13 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.135 20:20:13 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.135 20:20:13 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.135 20:20:13 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.135 20:20:13 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.135 20:20:13 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.135 20:20:13 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.135 20:20:13 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:58.135 20:20:13 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:58.135 20:20:13 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:58.136 20:20:13 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:58.136 20:20:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.136 20:20:13 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:58.136 20:20:13 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:58.136 20:20:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:58.136 20:20:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.136 
20:20:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.136 ************************************ 00:06:58.136 START TEST nvmf_example 00:06:58.136 ************************************ 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:58.136 * Looking for test storage... 00:06:58.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.136 20:20:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.136 20:20:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:58.136 20:20:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:58.136 20:20:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:58.136 20:20:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:06.271 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:06.272 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:06.272 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:06.272 Found net devices under 
0000:31:00.0: cvl_0_0 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:06.272 Found net devices under 0000:31:00.1: cvl_0_1 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:06.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:07:06.272 00:07:06.272 --- 10.0.0.2 ping statistics --- 00:07:06.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.272 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.466 ms 00:07:06.272 00:07:06.272 --- 10.0.0.1 ping statistics --- 00:07:06.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.272 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2841988 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2841988 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 2841988 ']' 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
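The block of ip/iptables commands traced above is what turns the two E810 ports into a self-contained target/initiator pair for the TCP tests. Condensed into one place for readability; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the values from this particular run, not fixed constants:

  ip netns add cvl_0_0_ns_spdk                           # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # first port becomes the target interface
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator keeps the second port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                     # default ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> default ns

The example nvmf target launched a few lines later is then prefixed with "ip netns exec cvl_0_0_ns_spdk", so it listens on 10.0.0.2 while spdk_nvme_perf connects from the default namespace.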
00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:06.272 20:20:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.272 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:06.842 20:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:06.842 EAL: No free 2048 kB hugepages reported on node 1 
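Once the example target is up, the test configures it over JSON-RPC and drives it from the initiator side. The rpc_cmd calls in the trace above correspond to the sequence below; writing them as standalone scripts/rpc.py invocations is an assumption about the wrapper (rpc_cmd is the test framework's RPC helper), but the method names and arguments are taken verbatim from this run:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192       # create the TCP transport
  ./scripts/rpc.py bdev_malloc_create 64 512                     # 64 MiB, 512-byte-block ramdisk -> Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The perf run's IOPS/latency summary printed below is the output of that last command against the Malloc0 namespace.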
00:07:19.067 Initializing NVMe Controllers 00:07:19.067 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:19.067 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:19.067 Initialization complete. Launching workers. 00:07:19.067 ======================================================== 00:07:19.067 Latency(us) 00:07:19.067 Device Information : IOPS MiB/s Average min max 00:07:19.067 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17665.72 69.01 3622.46 827.93 15255.29 00:07:19.067 ======================================================== 00:07:19.067 Total : 17665.72 69.01 3622.46 827.93 15255.29 00:07:19.067 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:19.067 rmmod nvme_tcp 00:07:19.067 rmmod nvme_fabrics 00:07:19.067 rmmod nvme_keyring 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2841988 ']' 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2841988 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 2841988 ']' 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 2841988 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:19.067 20:20:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2841988 00:07:19.067 20:20:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:19.067 20:20:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:19.067 20:20:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2841988' 00:07:19.067 killing process with pid 2841988 00:07:19.067 20:20:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 2841988 00:07:19.067 20:20:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 2841988 00:07:19.067 nvmf threads initialize successfully 00:07:19.067 bdev subsystem init successfully 00:07:19.067 created a nvmf target service 00:07:19.067 create targets's poll groups done 00:07:19.067 all subsystems of target started 00:07:19.067 nvmf target is running 00:07:19.067 all subsystems of target stopped 00:07:19.067 destroy targets's poll groups done 00:07:19.067 destroyed the nvmf target service 00:07:19.067 bdev subsystem finish successfully 00:07:19.067 nvmf threads destroy successfully 00:07:19.067 20:20:33 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:19.067 20:20:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:19.067 20:20:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:19.067 20:20:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:19.067 20:20:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:19.067 20:20:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.067 20:20:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.067 20:20:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.327 20:20:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:19.327 20:20:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:19.327 20:20:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:19.327 20:20:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.327 00:07:19.327 real 0m21.384s 00:07:19.327 user 0m46.488s 00:07:19.327 sys 0m6.827s 00:07:19.327 20:20:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.327 20:20:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.327 ************************************ 00:07:19.327 END TEST nvmf_example 00:07:19.327 ************************************ 00:07:19.591 20:20:35 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:19.591 20:20:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:19.591 20:20:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.591 20:20:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:19.591 ************************************ 00:07:19.591 START TEST nvmf_filesystem 00:07:19.591 ************************************ 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:19.591 * Looking for test storage... 
00:07:19.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:19.591 20:20:35 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:19.591 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:19.592 20:20:35 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:19.592 
20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:19.592 #define SPDK_CONFIG_H 00:07:19.592 #define SPDK_CONFIG_APPS 1 00:07:19.592 #define SPDK_CONFIG_ARCH native 00:07:19.592 #undef SPDK_CONFIG_ASAN 00:07:19.592 #undef SPDK_CONFIG_AVAHI 00:07:19.592 #undef SPDK_CONFIG_CET 00:07:19.592 #define SPDK_CONFIG_COVERAGE 1 00:07:19.592 #define SPDK_CONFIG_CROSS_PREFIX 00:07:19.592 #undef SPDK_CONFIG_CRYPTO 00:07:19.592 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:19.592 #undef SPDK_CONFIG_CUSTOMOCF 00:07:19.592 #undef SPDK_CONFIG_DAOS 00:07:19.592 #define SPDK_CONFIG_DAOS_DIR 00:07:19.592 #define SPDK_CONFIG_DEBUG 1 00:07:19.592 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:19.592 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:19.592 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:19.592 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:19.592 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:19.592 #undef SPDK_CONFIG_DPDK_UADK 00:07:19.592 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:19.592 #define SPDK_CONFIG_EXAMPLES 1 00:07:19.592 #undef SPDK_CONFIG_FC 00:07:19.592 #define SPDK_CONFIG_FC_PATH 00:07:19.592 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:19.592 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:19.592 #undef SPDK_CONFIG_FUSE 00:07:19.592 #undef SPDK_CONFIG_FUZZER 00:07:19.592 #define SPDK_CONFIG_FUZZER_LIB 00:07:19.592 #undef SPDK_CONFIG_GOLANG 00:07:19.592 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:19.592 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:19.592 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:19.592 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:19.592 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:19.592 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:19.592 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:19.592 #define SPDK_CONFIG_IDXD 1 00:07:19.592 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:19.592 #undef SPDK_CONFIG_IPSEC_MB 00:07:19.592 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:19.592 #define SPDK_CONFIG_ISAL 1 00:07:19.592 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:19.592 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:19.592 #define SPDK_CONFIG_LIBDIR 00:07:19.592 #undef SPDK_CONFIG_LTO 00:07:19.592 #define SPDK_CONFIG_MAX_LCORES 00:07:19.592 #define SPDK_CONFIG_NVME_CUSE 1 00:07:19.592 #undef SPDK_CONFIG_OCF 00:07:19.592 #define SPDK_CONFIG_OCF_PATH 00:07:19.592 #define SPDK_CONFIG_OPENSSL_PATH 00:07:19.592 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:19.592 #define SPDK_CONFIG_PGO_DIR 00:07:19.592 #undef 
SPDK_CONFIG_PGO_USE 00:07:19.592 #define SPDK_CONFIG_PREFIX /usr/local 00:07:19.592 #undef SPDK_CONFIG_RAID5F 00:07:19.592 #undef SPDK_CONFIG_RBD 00:07:19.592 #define SPDK_CONFIG_RDMA 1 00:07:19.592 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:19.592 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:19.592 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:19.592 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:19.592 #define SPDK_CONFIG_SHARED 1 00:07:19.592 #undef SPDK_CONFIG_SMA 00:07:19.592 #define SPDK_CONFIG_TESTS 1 00:07:19.592 #undef SPDK_CONFIG_TSAN 00:07:19.592 #define SPDK_CONFIG_UBLK 1 00:07:19.592 #define SPDK_CONFIG_UBSAN 1 00:07:19.592 #undef SPDK_CONFIG_UNIT_TESTS 00:07:19.592 #undef SPDK_CONFIG_URING 00:07:19.592 #define SPDK_CONFIG_URING_PATH 00:07:19.592 #undef SPDK_CONFIG_URING_ZNS 00:07:19.592 #undef SPDK_CONFIG_USDT 00:07:19.592 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:19.592 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:19.592 #undef SPDK_CONFIG_VFIO_USER 00:07:19.592 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:19.592 #define SPDK_CONFIG_VHOST 1 00:07:19.592 #define SPDK_CONFIG_VIRTIO 1 00:07:19.592 #undef SPDK_CONFIG_VTUNE 00:07:19.592 #define SPDK_CONFIG_VTUNE_DIR 00:07:19.592 #define SPDK_CONFIG_WERROR 1 00:07:19.592 #define SPDK_CONFIG_WPDK_DIR 00:07:19.592 #undef SPDK_CONFIG_XNVME 00:07:19.592 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.592 20:20:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:19.593 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:07:19.594 20:20:35 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:19.594 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j144 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 2844790 ]] 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 2844790 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.wJjgCM 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.wJjgCM/tests/target /tmp/spdk.wJjgCM 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:19.857 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- 
# avails["$mount"]=67108864 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=972746752 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4311683072 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=120702177280 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=129371009024 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8668831744 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=64680792064 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=64685502464 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=25864232960 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=25874202624 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9969664 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=efivarfs 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=efivarfs 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=189440 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=507904 00:07:19.858 20:20:35 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=314368 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=64684474368 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=64685506560 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1032192 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12937093120 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12937097216 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:19.858 * Looking for test storage... 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=120702177280 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=10883424256 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.858 20:20:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:19.859 20:20:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:28.070 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 
(0x8086 - 0x159b)' 00:07:28.070 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.070 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:28.071 Found net devices under 0000:31:00.0: cvl_0_0 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:28.071 Found net devices under 0000:31:00.1: cvl_0_1 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:28.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:07:28.071 00:07:28.071 --- 10.0.0.2 ping statistics --- 00:07:28.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.071 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:28.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:07:28.071 00:07:28.071 --- 10.0.0.1 ping statistics --- 00:07:28.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.071 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.071 ************************************ 00:07:28.071 START TEST nvmf_filesystem_no_in_capsule 00:07:28.071 ************************************ 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2849098 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2849098 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2849098 ']' 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:28.071 20:20:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.071 [2024-05-13 20:20:43.812355] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:07:28.071 [2024-05-13 20:20:43.812409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.071 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.071 [2024-05-13 20:20:43.889130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.071 [2024-05-13 20:20:43.966277] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.071 [2024-05-13 20:20:43.966321] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.071 [2024-05-13 20:20:43.966330] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.071 [2024-05-13 20:20:43.966336] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.071 [2024-05-13 20:20:43.966342] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:28.071 [2024-05-13 20:20:43.966521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.071 [2024-05-13 20:20:43.966637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.071 [2024-05-13 20:20:43.966754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.071 [2024-05-13 20:20:43.966757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 [2024-05-13 20:20:44.643933] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.015 20:20:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 Malloc1 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 [2024-05-13 20:20:44.774529] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:29.015 [2024-05-13 20:20:44.774790] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:29.015 20:20:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:29.015 { 00:07:29.015 "name": "Malloc1", 00:07:29.015 "aliases": [ 00:07:29.015 "f394db71-dc4b-4690-b2f3-ea683d2953d1" 00:07:29.015 ], 00:07:29.015 "product_name": "Malloc disk", 00:07:29.015 "block_size": 512, 00:07:29.015 "num_blocks": 1048576, 00:07:29.015 "uuid": "f394db71-dc4b-4690-b2f3-ea683d2953d1", 00:07:29.015 "assigned_rate_limits": { 00:07:29.015 "rw_ios_per_sec": 0, 00:07:29.015 "rw_mbytes_per_sec": 0, 00:07:29.015 "r_mbytes_per_sec": 0, 00:07:29.015 "w_mbytes_per_sec": 0 00:07:29.015 }, 00:07:29.015 "claimed": true, 00:07:29.015 "claim_type": "exclusive_write", 00:07:29.015 "zoned": false, 00:07:29.015 "supported_io_types": { 00:07:29.015 "read": true, 00:07:29.015 "write": true, 00:07:29.015 "unmap": true, 00:07:29.015 "write_zeroes": true, 00:07:29.015 "flush": true, 00:07:29.015 "reset": true, 00:07:29.015 "compare": false, 00:07:29.015 "compare_and_write": false, 00:07:29.015 "abort": true, 00:07:29.015 "nvme_admin": false, 00:07:29.015 "nvme_io": false 00:07:29.015 }, 00:07:29.015 "memory_domains": [ 00:07:29.015 { 00:07:29.015 "dma_device_id": "system", 00:07:29.015 "dma_device_type": 1 00:07:29.015 }, 00:07:29.015 { 00:07:29.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.015 "dma_device_type": 2 00:07:29.015 } 00:07:29.015 ], 00:07:29.015 "driver_specific": {} 00:07:29.015 } 00:07:29.015 ]' 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:29.015 20:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:30.930 20:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:30.930 20:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:30.930 20:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:30.930 20:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:30.930 20:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:32.847 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:33.108 20:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:33.367 20:20:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.309 ************************************ 00:07:34.309 START TEST filesystem_ext4 00:07:34.309 ************************************ 
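The filesystem_* passes that follow all drive the same cycle from target/filesystem.sh against the exported namespace: create a filesystem on the first partition, mount it, create and delete a file, unmount, and confirm the block devices are still visible. A minimal stand-alone sketch of that cycle, shown for the ext4 case (the device and mount point are the ones this particular run happens to use; the btrfs and xfs passes differ only in the mkfs command):

    dev=/dev/nvme0n1p1            # partition created above by 'parted ... mkpart SPDK_TEST 0% 100%'
    mnt=/mnt/device

    mkfs.ext4 -F "$dev"           # the btrfs/xfs passes use 'mkfs.btrfs -f' / 'mkfs.xfs -f' instead
    mkdir -p "$mnt"
    mount "$dev" "$mnt"
    touch "$mnt/aaa"              # write something, flush it, remove it
    sync
    rm "$mnt/aaa"
    sync
    umount "$mnt"
    lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still present
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still present
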
00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:34.309 20:20:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:34.309 mke2fs 1.46.5 (30-Dec-2021) 00:07:34.309 Discarding device blocks: 0/522240 done 00:07:34.309 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:34.309 Filesystem UUID: 97b52121-9d71-47d3-b2ab-8b7375136a1d 00:07:34.309 Superblock backups stored on blocks: 00:07:34.309 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:34.309 00:07:34.309 Allocating group tables: 0/64 done 00:07:34.309 Writing inode tables: 0/64 done 00:07:37.609 Creating journal (8192 blocks): done 00:07:38.179 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:07:38.179 00:07:38.179 20:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:38.179 20:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:39.120 20:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:39.120 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:39.120 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:39.120 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:39.120 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:39.120 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2849098 00:07:39.381 20:20:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:39.381 00:07:39.381 real 0m4.921s 00:07:39.381 user 0m0.026s 00:07:39.381 sys 0m0.076s 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:39.381 ************************************ 00:07:39.381 END TEST filesystem_ext4 00:07:39.381 ************************************ 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.381 ************************************ 00:07:39.381 START TEST filesystem_btrfs 00:07:39.381 ************************************ 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:39.381 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:39.642 btrfs-progs v6.6.2 00:07:39.642 See https://btrfs.readthedocs.io for more information. 
00:07:39.642 00:07:39.642 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:39.642 NOTE: several default settings have changed in version 5.15, please make sure 00:07:39.642 this does not affect your deployments: 00:07:39.642 - DUP for metadata (-m dup) 00:07:39.642 - enabled no-holes (-O no-holes) 00:07:39.642 - enabled free-space-tree (-R free-space-tree) 00:07:39.642 00:07:39.642 Label: (null) 00:07:39.642 UUID: d0db6683-c262-41cb-92cc-a41b40c9eb51 00:07:39.642 Node size: 16384 00:07:39.642 Sector size: 4096 00:07:39.642 Filesystem size: 510.00MiB 00:07:39.642 Block group profiles: 00:07:39.642 Data: single 8.00MiB 00:07:39.642 Metadata: DUP 32.00MiB 00:07:39.642 System: DUP 8.00MiB 00:07:39.642 SSD detected: yes 00:07:39.642 Zoned device: no 00:07:39.642 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:39.642 Runtime features: free-space-tree 00:07:39.642 Checksum: crc32c 00:07:39.642 Number of devices: 1 00:07:39.642 Devices: 00:07:39.642 ID SIZE PATH 00:07:39.642 1 510.00MiB /dev/nvme0n1p1 00:07:39.642 00:07:39.642 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:39.642 20:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:40.584 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:40.584 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:40.584 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:40.584 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:40.584 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2849098 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:40.585 00:07:40.585 real 0m1.198s 00:07:40.585 user 0m0.029s 00:07:40.585 sys 0m0.134s 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:40.585 ************************************ 00:07:40.585 END TEST filesystem_btrfs 00:07:40.585 ************************************ 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test 
filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.585 ************************************ 00:07:40.585 START TEST filesystem_xfs 00:07:40.585 ************************************ 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:40.585 20:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:40.585 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:40.585 = sectsz=512 attr=2, projid32bit=1 00:07:40.585 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:40.585 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:40.585 data = bsize=4096 blocks=130560, imaxpct=25 00:07:40.585 = sunit=0 swidth=0 blks 00:07:40.585 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:40.585 log =internal log bsize=4096 blocks=16384, version=2 00:07:40.585 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:40.585 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:41.969 Discarding blocks...Done. 
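Each of these passes reaches mkfs through the make_filesystem helper in common/autotest_common.sh; the xtrace above shows it selecting -F for ext4 and -f for everything else before calling mkfs.<fstype>. A rough equivalent of what the visible trace does (the helper also carries a retry counter i, which never comes into play in this run and is left out of this sketch):

    # Approximation of make_filesystem() as exercised in this log.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local force

        if [ "$fstype" = ext4 ]; then
            force=-F              # mkfs.ext4 takes -F to force overwrite
        else
            force=-f              # mkfs.btrfs / mkfs.xfs take -f
        fi

        mkfs."$fstype" $force "$dev_name"
    }

    # e.g. make_filesystem xfs /dev/nvme0n1p1
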
00:07:41.969 20:20:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:41.969 20:20:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.884 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.884 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:43.884 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.884 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:43.884 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:43.884 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.884 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2849098 00:07:43.884 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.884 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.884 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.884 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.884 00:07:43.884 real 0m3.206s 00:07:43.884 user 0m0.020s 00:07:43.884 sys 0m0.083s 00:07:43.884 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.884 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:43.884 ************************************ 00:07:43.884 END TEST filesystem_xfs 00:07:43.884 ************************************ 00:07:43.884 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:44.145 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:44.145 20:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:44.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.407 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:44.407 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:44.407 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:44.407 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.407 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:44.408 
20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2849098 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2849098 ']' 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2849098 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2849098 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2849098' 00:07:44.408 killing process with pid 2849098 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 2849098 00:07:44.408 [2024-05-13 20:21:00.190898] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:44.408 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 2849098 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:44.669 00:07:44.669 real 0m16.673s 00:07:44.669 user 1m5.775s 00:07:44.669 sys 0m1.320s 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.669 ************************************ 00:07:44.669 END TEST nvmf_filesystem_no_in_capsule 00:07:44.669 ************************************ 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.669 ************************************ 00:07:44.669 START TEST nvmf_filesystem_in_capsule 00:07:44.669 ************************************ 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2852726 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2852726 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2852726 ']' 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:44.669 20:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.669 [2024-05-13 20:21:00.571594] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:07:44.669 [2024-05-13 20:21:00.571651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.669 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.929 [2024-05-13 20:21:00.648552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.929 [2024-05-13 20:21:00.723034] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.929 [2024-05-13 20:21:00.723076] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
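The in-capsule pass starting here repeats the first half of the run end to end; the functional difference is that the transport is created with an in-capsule data size of 4096 bytes instead of 0. The target configuration it is about to apply corresponds roughly to the following RPC sequence (a sketch using scripts/rpc.py from the SPDK tree and the NQN/address seen in this run; the harness issues the same calls through rpc_cmd inside its network namespace):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # the first pass used -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # The host then connects as before (plus the --hostnqn/--hostid values shown in the log):
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
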
00:07:44.929 [2024-05-13 20:21:00.723084] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.929 [2024-05-13 20:21:00.723090] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.929 [2024-05-13 20:21:00.723095] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.929 [2024-05-13 20:21:00.723234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.929 [2024-05-13 20:21:00.723341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.929 [2024-05-13 20:21:00.723424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.929 [2024-05-13 20:21:00.723427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.500 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:45.500 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:45.500 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:45.500 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:45.500 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.500 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.500 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:45.500 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:45.500 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.500 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.500 [2024-05-13 20:21:01.396962] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.500 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.500 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:45.500 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.500 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.761 Malloc1 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.761 20:21:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.761 [2024-05-13 20:21:01.522198] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:45.761 [2024-05-13 20:21:01.522470] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:45.761 { 00:07:45.761 "name": "Malloc1", 00:07:45.761 "aliases": [ 00:07:45.761 "b042c863-27a0-4a99-bed3-08ff0512421c" 00:07:45.761 ], 00:07:45.761 "product_name": "Malloc disk", 00:07:45.761 "block_size": 512, 00:07:45.761 "num_blocks": 1048576, 00:07:45.761 "uuid": "b042c863-27a0-4a99-bed3-08ff0512421c", 00:07:45.761 "assigned_rate_limits": { 00:07:45.761 "rw_ios_per_sec": 0, 00:07:45.761 "rw_mbytes_per_sec": 0, 00:07:45.761 "r_mbytes_per_sec": 0, 00:07:45.761 "w_mbytes_per_sec": 0 00:07:45.761 }, 00:07:45.761 "claimed": true, 00:07:45.761 "claim_type": "exclusive_write", 00:07:45.761 "zoned": false, 00:07:45.761 "supported_io_types": { 00:07:45.761 "read": true, 00:07:45.761 "write": true, 00:07:45.761 "unmap": true, 00:07:45.761 "write_zeroes": true, 00:07:45.761 "flush": true, 00:07:45.761 "reset": true, 
00:07:45.761 "compare": false, 00:07:45.761 "compare_and_write": false, 00:07:45.761 "abort": true, 00:07:45.761 "nvme_admin": false, 00:07:45.761 "nvme_io": false 00:07:45.761 }, 00:07:45.761 "memory_domains": [ 00:07:45.761 { 00:07:45.761 "dma_device_id": "system", 00:07:45.761 "dma_device_type": 1 00:07:45.761 }, 00:07:45.761 { 00:07:45.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.761 "dma_device_type": 2 00:07:45.761 } 00:07:45.761 ], 00:07:45.761 "driver_specific": {} 00:07:45.761 } 00:07:45.761 ]' 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:45.761 20:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:47.672 20:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:47.673 20:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:47.673 20:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:47.673 20:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:47.673 20:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:49.586 20:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:50.524 20:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.466 ************************************ 00:07:51.466 START TEST filesystem_in_capsule_ext4 00:07:51.466 ************************************ 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:51.466 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:51.466 mke2fs 1.46.5 (30-Dec-2021) 00:07:51.466 Discarding device blocks: 0/522240 done 00:07:51.466 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:51.466 Filesystem UUID: 4a40dea9-b57d-49f5-97cd-43a4bc72d18e 00:07:51.466 Superblock backups stored on blocks: 00:07:51.466 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:51.466 00:07:51.466 Allocating group tables: 0/64 done 00:07:51.726 Writing inode tables: 0/64 done 00:07:51.726 Creating journal (8192 blocks): done 00:07:51.986 Writing superblocks and filesystem accounting information: 0/64 done 00:07:51.986 00:07:51.986 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:51.986 20:21:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:52.246 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:52.246 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:52.246 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:52.246 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:52.246 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:52.246 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:52.246 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2852726 00:07:52.246 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:52.246 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:52.507 00:07:52.507 real 0m0.890s 00:07:52.507 user 0m0.026s 00:07:52.507 sys 0m0.069s 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:52.507 ************************************ 00:07:52.507 END TEST filesystem_in_capsule_ext4 00:07:52.507 ************************************ 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.507 ************************************ 00:07:52.507 START TEST filesystem_in_capsule_btrfs 00:07:52.507 ************************************ 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:52.507 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:52.768 btrfs-progs v6.6.2 00:07:52.768 See https://btrfs.readthedocs.io for more information. 00:07:52.768 00:07:52.768 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:52.768 NOTE: several default settings have changed in version 5.15, please make sure 00:07:52.768 this does not affect your deployments: 00:07:52.768 - DUP for metadata (-m dup) 00:07:52.768 - enabled no-holes (-O no-holes) 00:07:52.768 - enabled free-space-tree (-R free-space-tree) 00:07:52.768 00:07:52.768 Label: (null) 00:07:52.768 UUID: 23c3a5f2-8235-4c89-89fb-e8ce5297ceff 00:07:52.768 Node size: 16384 00:07:52.768 Sector size: 4096 00:07:52.768 Filesystem size: 510.00MiB 00:07:52.768 Block group profiles: 00:07:52.768 Data: single 8.00MiB 00:07:52.768 Metadata: DUP 32.00MiB 00:07:52.768 System: DUP 8.00MiB 00:07:52.768 SSD detected: yes 00:07:52.768 Zoned device: no 00:07:52.768 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:52.768 Runtime features: free-space-tree 00:07:52.768 Checksum: crc32c 00:07:52.768 Number of devices: 1 00:07:52.768 Devices: 00:07:52.768 ID SIZE PATH 00:07:52.768 1 510.00MiB /dev/nvme0n1p1 00:07:52.768 00:07:52.768 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:52.768 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:53.028 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:53.288 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:53.288 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:53.288 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:53.288 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:53.288 20:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2852726 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:53.288 00:07:53.288 real 0m0.753s 00:07:53.288 user 0m0.031s 00:07:53.288 sys 0m0.133s 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:53.288 ************************************ 00:07:53.288 END TEST filesystem_in_capsule_btrfs 00:07:53.288 ************************************ 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.288 ************************************ 00:07:53.288 START TEST filesystem_in_capsule_xfs 00:07:53.288 ************************************ 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:53.288 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:53.288 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:53.288 = sectsz=512 attr=2, projid32bit=1 00:07:53.288 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:53.288 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:53.288 data = bsize=4096 blocks=130560, imaxpct=25 00:07:53.288 = sunit=0 swidth=0 blks 00:07:53.288 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:53.288 log =internal log bsize=4096 blocks=16384, version=2 00:07:53.288 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:53.288 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:54.228 Discarding blocks...Done. 
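Once the xfs pass below finishes, the closing entries tear everything down the same way the first half of the run did: wipe the test partition, disconnect the host controller, delete the subsystem over RPC, and stop the target. In rough stand-alone form (a sketch; the PID is whatever nvmf_tgt instance the run started, 2852726 in this case):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # remove the SPDK_TEST partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # detach the NVMe/TCP controller
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"                                   # the harness's killprocess step
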
00:07:54.228 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:54.228 20:21:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:56.140 20:21:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.140 20:21:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:56.140 20:21:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.140 20:21:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:56.140 20:21:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:56.140 20:21:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.140 20:21:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2852726 00:07:56.140 20:21:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.140 20:21:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.140 20:21:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.140 20:21:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.140 00:07:56.140 real 0m2.770s 00:07:56.140 user 0m0.028s 00:07:56.140 sys 0m0.077s 00:07:56.140 20:21:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:56.140 20:21:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:56.140 ************************************ 00:07:56.140 END TEST filesystem_in_capsule_xfs 00:07:56.140 ************************************ 00:07:56.140 20:21:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:56.140 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:56.140 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:56.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.402 20:21:12 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2852726 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2852726 ']' 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2852726 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2852726 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2852726' 00:07:56.402 killing process with pid 2852726 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 2852726 00:07:56.402 [2024-05-13 20:21:12.190766] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:56.402 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 2852726 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:56.662 00:07:56.662 real 0m11.912s 00:07:56.662 user 0m46.845s 00:07:56.662 sys 0m1.264s 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.662 ************************************ 00:07:56.662 END TEST nvmf_filesystem_in_capsule 00:07:56.662 ************************************ 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:56.662 rmmod nvme_tcp 00:07:56.662 rmmod nvme_fabrics 00:07:56.662 rmmod nvme_keyring 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.662 20:21:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.659 20:21:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:58.659 00:07:58.659 real 0m39.247s 00:07:58.659 user 1m54.990s 00:07:58.659 sys 0m8.805s 00:07:58.659 20:21:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.659 20:21:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.659 ************************************ 00:07:58.659 END TEST nvmf_filesystem 00:07:58.659 ************************************ 00:07:58.919 20:21:14 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:58.919 20:21:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:58.919 20:21:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.919 20:21:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:58.919 ************************************ 00:07:58.919 START TEST nvmf_target_discovery 00:07:58.919 ************************************ 00:07:58.919 20:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:58.919 * Looking for test storage... 
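The discovery test starting here builds four null-bdev subsystems, adds a TCP listener on 10.0.0.2:4420 for each plus a referral on port 4430, and then reads the discovery log from the initiator. Condensed into plain RPC/CLI calls (a sketch assuming the in-tree scripts/rpc.py entry point; the trace itself goes through the rpc_cmd wrapper and passes hostnqn/hostid to nvme discover):

  # target side: transport, one subsystem per null bdev, listeners, and a referral
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_null_create Null1 102400 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # repeated for Null2-4 / cnode2-4, plus a listener on the discovery subsystem
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  # initiator side: read the discovery log (6 records in this run)
  nvme discover -t tcp -a 10.0.0.2 -s 4420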
00:07:58.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.919 20:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:58.920 20:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.057 20:21:22 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:07.057 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:07.057 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:07.057 Found net devices under 0000:31:00.0: cvl_0_0 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:07.057 Found net devices under 0000:31:00.1: cvl_0_1 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.057 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:07.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:08:07.058 00:08:07.058 --- 10.0.0.2 ping statistics --- 00:08:07.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.058 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:08:07.058 00:08:07.058 --- 10.0.0.1 ping statistics --- 00:08:07.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.058 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2860516 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2860516 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 2860516 ']' 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:07.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:07.058 20:21:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.058 [2024-05-13 20:21:22.981546] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:08:07.058 [2024-05-13 20:21:22.981612] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.318 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.318 [2024-05-13 20:21:23.059707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.319 [2024-05-13 20:21:23.133733] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.319 [2024-05-13 20:21:23.133776] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.319 [2024-05-13 20:21:23.133784] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.319 [2024-05-13 20:21:23.133791] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.319 [2024-05-13 20:21:23.133796] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.319 [2024-05-13 20:21:23.133937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.319 [2024-05-13 20:21:23.134064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.319 [2024-05-13 20:21:23.134208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.319 [2024-05-13 20:21:23.134211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.891 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:07.891 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:07.891 20:21:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.891 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.891 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.891 20:21:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.891 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.891 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.891 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.891 [2024-05-13 20:21:23.811907] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.891 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.891 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:07.891 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.891 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:07.891 20:21:23 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.891 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 Null1 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 [2024-05-13 20:21:23.872045] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:08.152 [2024-05-13 20:21:23.872251] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 Null2 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 Null3 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 Null4 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:08.152 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:08.152 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:08.152 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.152 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.152 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.152 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:08.414 00:08:08.414 Discovery Log Number of Records 6, Generation counter 6 00:08:08.414 =====Discovery Log Entry 0====== 00:08:08.414 trtype: tcp 00:08:08.414 adrfam: ipv4 00:08:08.414 subtype: current discovery subsystem 00:08:08.414 treq: not required 00:08:08.414 portid: 0 00:08:08.414 trsvcid: 4420 00:08:08.414 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:08.414 traddr: 10.0.0.2 00:08:08.414 eflags: explicit discovery connections, duplicate discovery information 00:08:08.414 sectype: none 00:08:08.414 =====Discovery Log Entry 1====== 00:08:08.414 trtype: tcp 00:08:08.414 adrfam: ipv4 00:08:08.414 subtype: nvme subsystem 00:08:08.414 treq: not required 00:08:08.414 portid: 0 00:08:08.414 trsvcid: 4420 00:08:08.414 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:08.414 traddr: 10.0.0.2 00:08:08.414 eflags: none 00:08:08.414 sectype: none 00:08:08.414 =====Discovery Log Entry 2====== 00:08:08.414 trtype: tcp 00:08:08.414 adrfam: ipv4 00:08:08.414 subtype: nvme subsystem 00:08:08.414 treq: not required 00:08:08.414 portid: 0 00:08:08.414 trsvcid: 4420 00:08:08.414 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:08.414 traddr: 10.0.0.2 00:08:08.414 eflags: none 00:08:08.414 sectype: none 00:08:08.414 =====Discovery Log Entry 3====== 00:08:08.414 trtype: tcp 00:08:08.414 adrfam: ipv4 00:08:08.414 subtype: nvme subsystem 00:08:08.414 treq: not required 00:08:08.414 portid: 0 00:08:08.414 trsvcid: 4420 00:08:08.414 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:08.414 traddr: 10.0.0.2 
00:08:08.414 eflags: none 00:08:08.414 sectype: none 00:08:08.414 =====Discovery Log Entry 4====== 00:08:08.414 trtype: tcp 00:08:08.414 adrfam: ipv4 00:08:08.414 subtype: nvme subsystem 00:08:08.414 treq: not required 00:08:08.414 portid: 0 00:08:08.414 trsvcid: 4420 00:08:08.414 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:08.414 traddr: 10.0.0.2 00:08:08.414 eflags: none 00:08:08.414 sectype: none 00:08:08.414 =====Discovery Log Entry 5====== 00:08:08.414 trtype: tcp 00:08:08.414 adrfam: ipv4 00:08:08.414 subtype: discovery subsystem referral 00:08:08.414 treq: not required 00:08:08.414 portid: 0 00:08:08.414 trsvcid: 4430 00:08:08.414 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:08.414 traddr: 10.0.0.2 00:08:08.414 eflags: none 00:08:08.414 sectype: none 00:08:08.414 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:08.414 Perform nvmf subsystem discovery via RPC 00:08:08.414 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:08.414 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.414 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.414 [ 00:08:08.414 { 00:08:08.414 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:08.414 "subtype": "Discovery", 00:08:08.414 "listen_addresses": [ 00:08:08.414 { 00:08:08.414 "trtype": "TCP", 00:08:08.414 "adrfam": "IPv4", 00:08:08.414 "traddr": "10.0.0.2", 00:08:08.414 "trsvcid": "4420" 00:08:08.414 } 00:08:08.414 ], 00:08:08.414 "allow_any_host": true, 00:08:08.414 "hosts": [] 00:08:08.414 }, 00:08:08.414 { 00:08:08.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:08.414 "subtype": "NVMe", 00:08:08.414 "listen_addresses": [ 00:08:08.414 { 00:08:08.414 "trtype": "TCP", 00:08:08.414 "adrfam": "IPv4", 00:08:08.414 "traddr": "10.0.0.2", 00:08:08.414 "trsvcid": "4420" 00:08:08.414 } 00:08:08.414 ], 00:08:08.414 "allow_any_host": true, 00:08:08.414 "hosts": [], 00:08:08.414 "serial_number": "SPDK00000000000001", 00:08:08.414 "model_number": "SPDK bdev Controller", 00:08:08.414 "max_namespaces": 32, 00:08:08.414 "min_cntlid": 1, 00:08:08.414 "max_cntlid": 65519, 00:08:08.414 "namespaces": [ 00:08:08.414 { 00:08:08.414 "nsid": 1, 00:08:08.414 "bdev_name": "Null1", 00:08:08.414 "name": "Null1", 00:08:08.414 "nguid": "31056F3C62F64128A9352665EF6983C9", 00:08:08.414 "uuid": "31056f3c-62f6-4128-a935-2665ef6983c9" 00:08:08.414 } 00:08:08.414 ] 00:08:08.414 }, 00:08:08.414 { 00:08:08.414 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:08.414 "subtype": "NVMe", 00:08:08.414 "listen_addresses": [ 00:08:08.414 { 00:08:08.414 "trtype": "TCP", 00:08:08.414 "adrfam": "IPv4", 00:08:08.414 "traddr": "10.0.0.2", 00:08:08.414 "trsvcid": "4420" 00:08:08.414 } 00:08:08.414 ], 00:08:08.414 "allow_any_host": true, 00:08:08.414 "hosts": [], 00:08:08.414 "serial_number": "SPDK00000000000002", 00:08:08.414 "model_number": "SPDK bdev Controller", 00:08:08.414 "max_namespaces": 32, 00:08:08.414 "min_cntlid": 1, 00:08:08.414 "max_cntlid": 65519, 00:08:08.414 "namespaces": [ 00:08:08.414 { 00:08:08.414 "nsid": 1, 00:08:08.414 "bdev_name": "Null2", 00:08:08.414 "name": "Null2", 00:08:08.414 "nguid": "A2024594286C431483E6A86398581B45", 00:08:08.414 "uuid": "a2024594-286c-4314-83e6-a86398581b45" 00:08:08.414 } 00:08:08.414 ] 00:08:08.414 }, 00:08:08.414 { 00:08:08.414 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:08.414 "subtype": "NVMe", 00:08:08.414 "listen_addresses": [ 
00:08:08.414 { 00:08:08.414 "trtype": "TCP", 00:08:08.414 "adrfam": "IPv4", 00:08:08.414 "traddr": "10.0.0.2", 00:08:08.414 "trsvcid": "4420" 00:08:08.414 } 00:08:08.414 ], 00:08:08.414 "allow_any_host": true, 00:08:08.414 "hosts": [], 00:08:08.414 "serial_number": "SPDK00000000000003", 00:08:08.414 "model_number": "SPDK bdev Controller", 00:08:08.414 "max_namespaces": 32, 00:08:08.414 "min_cntlid": 1, 00:08:08.414 "max_cntlid": 65519, 00:08:08.414 "namespaces": [ 00:08:08.415 { 00:08:08.415 "nsid": 1, 00:08:08.415 "bdev_name": "Null3", 00:08:08.415 "name": "Null3", 00:08:08.415 "nguid": "D69C5E082EBB4B849DF2E86815EC38FC", 00:08:08.415 "uuid": "d69c5e08-2ebb-4b84-9df2-e86815ec38fc" 00:08:08.415 } 00:08:08.415 ] 00:08:08.415 }, 00:08:08.415 { 00:08:08.415 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:08.415 "subtype": "NVMe", 00:08:08.415 "listen_addresses": [ 00:08:08.415 { 00:08:08.415 "trtype": "TCP", 00:08:08.415 "adrfam": "IPv4", 00:08:08.415 "traddr": "10.0.0.2", 00:08:08.415 "trsvcid": "4420" 00:08:08.415 } 00:08:08.415 ], 00:08:08.415 "allow_any_host": true, 00:08:08.415 "hosts": [], 00:08:08.415 "serial_number": "SPDK00000000000004", 00:08:08.415 "model_number": "SPDK bdev Controller", 00:08:08.415 "max_namespaces": 32, 00:08:08.415 "min_cntlid": 1, 00:08:08.415 "max_cntlid": 65519, 00:08:08.415 "namespaces": [ 00:08:08.415 { 00:08:08.415 "nsid": 1, 00:08:08.415 "bdev_name": "Null4", 00:08:08.415 "name": "Null4", 00:08:08.415 "nguid": "4880A3060EC44E558D24243A7D12A93E", 00:08:08.415 "uuid": "4880a306-0ec4-4e55-8d24-243a7d12a93e" 00:08:08.415 } 00:08:08.415 ] 00:08:08.415 } 00:08:08.415 ] 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:08.415 
20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.415 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.415 rmmod nvme_tcp 00:08:08.677 rmmod nvme_fabrics 00:08:08.677 rmmod nvme_keyring 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2860516 ']' 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2860516 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 2860516 ']' 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 2860516 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2860516 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2860516' 00:08:08.677 killing process with pid 2860516 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 2860516 00:08:08.677 [2024-05-13 20:21:24.452582] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 2860516 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.677 20:21:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.221 20:21:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:11.221 00:08:11.221 real 0m11.974s 00:08:11.221 user 
0m8.312s 00:08:11.221 sys 0m6.315s 00:08:11.221 20:21:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.221 20:21:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:11.221 ************************************ 00:08:11.221 END TEST nvmf_target_discovery 00:08:11.221 ************************************ 00:08:11.221 20:21:26 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:11.221 20:21:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:11.221 20:21:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.221 20:21:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.221 ************************************ 00:08:11.221 START TEST nvmf_referrals 00:08:11.221 ************************************ 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:11.221 * Looking for test storage... 00:08:11.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.221 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.222 20:21:26 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:11.222 20:21:26 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:11.222 20:21:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:19.365 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:19.365 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:19.365 Found net devices under 0000:31:00.0: cvl_0_0 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:19.365 Found net devices under 0000:31:00.1: cvl_0_1 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.365 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
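Before the target application starts, nvmf_tcp_init splits the two detected ice ports into a back-to-back topology: cvl_0_0 is moved into a dedicated network namespace and carries the target address, while cvl_0_1 stays in the root namespace as the initiator side. Condensed from the trace around this point (interface names, addresses and the namespace name are specific to this run), the bring-up is roughly:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and the reverse direction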
00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:08:19.366 00:08:19.366 --- 10.0.0.2 ping statistics --- 00:08:19.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.366 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:08:19.366 00:08:19.366 --- 10.0.0.1 ping statistics --- 00:08:19.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.366 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2865562 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2865562 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 2865562 ']' 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:19.366 20:21:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.366 [2024-05-13 20:21:34.950265] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:08:19.366 [2024-05-13 20:21:34.950335] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.366 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.366 [2024-05-13 20:21:35.027438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.366 [2024-05-13 20:21:35.101638] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.366 [2024-05-13 20:21:35.101679] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.366 [2024-05-13 20:21:35.101687] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.366 [2024-05-13 20:21:35.101693] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.366 [2024-05-13 20:21:35.101699] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.366 [2024-05-13 20:21:35.101843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.366 [2024-05-13 20:21:35.101976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.366 [2024-05-13 20:21:35.102101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.366 [2024-05-13 20:21:35.102104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.938 [2024-05-13 20:21:35.785957] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.938 [2024-05-13 20:21:35.801964] nvmf_rpc.c: 
610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:19.938 [2024-05-13 20:21:35.802175] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.938 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.200 20:21:35 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.200 20:21:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.462 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.723 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
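The referral checks in this trace follow a fixed round-trip: add referrals through the RPC interface, confirm the count and addresses with nvmf_discovery_get_referrals, confirm that an initiator sees the same entries by querying the discovery service on 10.0.0.2:8009, then remove them and re-check both views. rpc_cmd in the trace is the autotest wrapper around SPDK's scripts/rpc.py; the host NQN/ID, addresses and subsystem NQN below are the ones used in this run. One iteration, condensed:

    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_discovery_get_referrals | jq length                    # referral count as the target sees it
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # the same referral must show up in a discovery log page fetched by an initiator
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_discovery_get_referrals | jq length                    # expected to drop back accordingly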
00:08:20.983 20:21:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:21.243 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:21.243 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:21.243 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:21.243 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:21.243 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:21.243 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.243 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:21.504 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:21.765 rmmod nvme_tcp 00:08:21.765 rmmod nvme_fabrics 00:08:21.765 rmmod nvme_keyring 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2865562 ']' 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2865562 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 2865562 ']' 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 2865562 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2865562 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2865562' 00:08:21.765 killing process with pid 2865562 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 2865562 00:08:21.765 [2024-05-13 20:21:37.614626] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:21.765 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 2865562 00:08:22.027 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:22.027 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:22.027 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:22.027 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:22.027 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
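Teardown (nvmftestfini) mirrors the setup: the host-side NVMe/TCP modules are unloaded, the nvmf_tgt process is killed, and the namespace plumbing is undone. With the PID and interface names from this run, the sequence above amounts to roughly:

    sync
    modprobe -v -r nvme-tcp          # rmmod'd nvme_tcp, nvme_fabrics and nvme_keyring above
    modprobe -v -r nvme-fabrics
    kill 2865562                     # killprocess: the nvmf_tgt (reactor_0) started for this test
    _remove_spdk_ns                  # internals not shown here; it undoes the earlier "ip netns add cvl_0_0_ns_spdk"
    ip -4 addr flush cvl_0_1         # drop the initiator address from the root namespace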
00:08:22.027 20:21:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.027 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.027 20:21:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.942 20:21:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:23.942 00:08:23.942 real 0m13.080s 00:08:23.942 user 0m13.829s 00:08:23.942 sys 0m6.558s 00:08:23.942 20:21:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:23.942 20:21:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.942 ************************************ 00:08:23.942 END TEST nvmf_referrals 00:08:23.942 ************************************ 00:08:23.942 20:21:39 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:23.942 20:21:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:23.942 20:21:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:23.942 20:21:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:24.204 ************************************ 00:08:24.204 START TEST nvmf_connect_disconnect 00:08:24.204 ************************************ 00:08:24.204 20:21:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:24.204 * Looking for test storage... 00:08:24.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.204 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.204 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:24.204 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.204 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.205 20:21:40 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
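connect_disconnect repeats the same bring-up the referrals test just went through: nvmftestinit rebuilds the namespace topology and nvmfappstart launches the target inside it before any subsystem work starts. For reference, the target launch and discovery listener from the referrals run above were as follows (paths, core mask and shared-memory id are from this run; the transport options are copied verbatim from the trace):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # waitforlisten blocks until the process listens on the RPC socket /var/tmp/spdk.sock, then:
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery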
00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:24.205 20:21:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.400 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.400 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.400 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:32.400 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.400 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.400 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.400 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:32.401 
20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:32.401 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:32.401 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:32.401 Found net devices under 0000:31:00.0: cvl_0_0 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:32.401 Found net devices under 0000:31:00.1: cvl_0_1 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:32.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:08:32.401 00:08:32.401 --- 10.0.0.2 ping statistics --- 00:08:32.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.401 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:32.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:08:32.401 00:08:32.401 --- 10.0.0.1 ping statistics --- 00:08:32.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.401 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:32.401 20:21:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:32.401 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:32.401 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.401 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:32.401 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.401 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2870724 00:08:32.401 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2870724 00:08:32.401 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.401 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 2870724 ']' 00:08:32.401 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.401 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:32.401 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.401 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:32.402 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.402 [2024-05-13 20:21:48.075550] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
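For readability, the nvmf_tcp_init sequence traced above reduces to the following shell steps (interface names and addresses are the ones reported in this run; nvmf/common.sh wraps them in additional checks and variables):

  # Move the target-side port into its own network namespace and address both ends.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator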
00:08:32.402 [2024-05-13 20:21:48.075601] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.402 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.402 [2024-05-13 20:21:48.148168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.402 [2024-05-13 20:21:48.216096] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.402 [2024-05-13 20:21:48.216132] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.402 [2024-05-13 20:21:48.216140] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.402 [2024-05-13 20:21:48.216147] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.402 [2024-05-13 20:21:48.216152] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.402 [2024-05-13 20:21:48.216285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.402 [2024-05-13 20:21:48.216414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.402 [2024-05-13 20:21:48.216498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.402 [2024-05-13 20:21:48.216501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.971 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:32.971 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:32.971 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.971 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:32.971 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.971 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.971 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:32.971 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.971 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.971 [2024-05-13 20:21:48.888920] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.972 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.972 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:32.972 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.972 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.231 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.231 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:33.232 20:21:48 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.232 [2024-05-13 20:21:48.948111] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:33.232 [2024-05-13 20:21:48.948360] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:33.232 20:21:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:35.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.258 
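The connect/disconnect test itself is a handful of RPCs plus a loop around nvme-cli. A minimal sketch of what the trace above and the repeated "disconnected 1 controller(s)" lines around this point correspond to (rpc_cmd is the test suite's wrapper around scripts/rpc.py; the loop body is an approximation, since the real connect_disconnect.sh also waits for the namespace to show up before disconnecting):

  # Target-side setup, issued to the nvmf_tgt running inside cvl_0_0_ns_spdk.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512                 # returns the bdev name, Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator-side loop: 100 connect/disconnect cycles against the listener above.
  for ((i = 0; i < 100; i++)); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      # ... wait for the namespace/block device to appear ...
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"
  done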
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:16.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:26.018 rmmod nvme_tcp 00:12:26.018 rmmod nvme_fabrics 00:12:26.018 rmmod nvme_keyring 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 
2870724 ']' 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2870724 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 2870724 ']' 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 2870724 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2870724 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2870724' 00:12:26.018 killing process with pid 2870724 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 2870724 00:12:26.018 [2024-05-13 20:25:41.609268] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 2870724 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.018 20:25:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.932 20:25:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:27.932 00:12:27.932 real 4m3.913s 00:12:27.932 user 15m27.791s 00:12:27.932 sys 0m23.172s 00:12:27.932 20:25:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:27.932 20:25:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.932 ************************************ 00:12:27.932 END TEST nvmf_connect_disconnect 00:12:27.932 ************************************ 00:12:27.932 20:25:43 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:27.932 20:25:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:27.932 20:25:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:27.932 20:25:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:28.193 ************************************ 00:12:28.193 START TEST nvmf_multitarget 00:12:28.193 ************************************ 00:12:28.193 
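Each test above is launched through the suite's run_test helper, which is what produces the START TEST / END TEST banners and the real/user/sys timing seen in this log. Roughly, and as a simplification of the helper in autotest_common.sh (which also records the result for the final report):

  run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
  # behaves approximately like:
  echo "************ START TEST nvmf_multitarget ************"
  time /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
  echo "************ END TEST nvmf_multitarget ************"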
20:25:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:28.193 * Looking for test storage... 00:12:28.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.193 20:25:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:28.194 20:25:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:36.343 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:36.343 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:36.343 Found net devices under 0000:31:00.0: cvl_0_0 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:36.343 Found net devices under 0000:31:00.1: cvl_0_1 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.343 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:36.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:36.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.731 ms 00:12:36.344 00:12:36.344 --- 10.0.0.2 ping statistics --- 00:12:36.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.344 rtt min/avg/max/mdev = 0.731/0.731/0.731/0.000 ms 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:12:36.344 00:12:36.344 --- 10.0.0.1 ping statistics --- 00:12:36.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.344 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2923098 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2923098 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 2923098 ']' 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:36.344 20:25:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:36.344 [2024-05-13 20:25:51.845745] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:12:36.344 [2024-05-13 20:25:51.845810] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.344 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.344 [2024-05-13 20:25:51.927988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.344 [2024-05-13 20:25:52.003893] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.344 [2024-05-13 20:25:52.003935] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.344 [2024-05-13 20:25:52.003947] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.344 [2024-05-13 20:25:52.003954] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.344 [2024-05-13 20:25:52.003959] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.344 [2024-05-13 20:25:52.004095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.344 [2024-05-13 20:25:52.004196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.344 [2024-05-13 20:25:52.004348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.344 [2024-05-13 20:25:52.004352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.914 20:25:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:36.914 20:25:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:36.914 20:25:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:36.914 20:25:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:36.914 20:25:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:36.914 20:25:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.914 20:25:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:36.914 20:25:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:36.914 20:25:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:36.914 20:25:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:36.914 20:25:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:36.914 "nvmf_tgt_1" 00:12:37.174 20:25:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:37.174 "nvmf_tgt_2" 00:12:37.174 20:25:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:37.174 20:25:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:37.174 20:25:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:37.174 
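The multitarget test exercises the nvmf_create_target and nvmf_delete_target RPCs through multitarget_rpc.py, checking the target count with jq after each step. Condensed, the flow traced above and continued in the trace below is (rpc_py is the multitarget_rpc.py path the script sets; counts and return values are the ones seen in this run):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1"
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32        # prints "nvmf_tgt_2"
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
  $rpc_py nvmf_delete_target -n nvmf_tgt_1              # prints "true"
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target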
20:25:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:37.434 true 00:12:37.434 20:25:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:37.434 true 00:12:37.434 20:25:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:37.434 20:25:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:37.694 rmmod nvme_tcp 00:12:37.694 rmmod nvme_fabrics 00:12:37.694 rmmod nvme_keyring 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2923098 ']' 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2923098 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 2923098 ']' 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 2923098 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:37.694 20:25:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:37.695 20:25:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2923098 00:12:37.695 20:25:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:37.695 20:25:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:37.695 20:25:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2923098' 00:12:37.695 killing process with pid 2923098 00:12:37.695 20:25:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 2923098 00:12:37.695 20:25:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 2923098 00:12:37.695 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:37.695 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:37.695 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:37.695 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:37.695 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:37.695 20:25:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.695 20:25:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.695 20:25:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.253 20:25:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:40.253 00:12:40.253 real 0m11.796s 00:12:40.253 user 0m9.413s 00:12:40.253 sys 0m6.174s 00:12:40.253 20:25:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:40.253 20:25:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:40.253 ************************************ 00:12:40.253 END TEST nvmf_multitarget 00:12:40.253 ************************************ 00:12:40.253 20:25:55 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:40.253 20:25:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:40.253 20:25:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:40.253 20:25:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:40.253 ************************************ 00:12:40.253 START TEST nvmf_rpc 00:12:40.253 ************************************ 00:12:40.253 20:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:40.253 * Looking for test storage... 00:12:40.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.253 20:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.253 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:40.253 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.253 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.253 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.253 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.253 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.253 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.253 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.253 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.253 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.254 20:25:55 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.254 
20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:40.254 20:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:48.395 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:48.395 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:48.395 Found net devices under 0000:31:00.0: cvl_0_0 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.395 
20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:48.395 Found net devices under 0000:31:00.1: cvl_0_1 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:48.395 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:48.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:48.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.740 ms 00:12:48.396 00:12:48.396 --- 10.0.0.2 ping statistics --- 00:12:48.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.396 rtt min/avg/max/mdev = 0.740/0.740/0.740/0.000 ms 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:48.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:12:48.396 00:12:48.396 --- 10.0.0.1 ping statistics --- 00:12:48.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.396 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2928106 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2928106 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 2928106 ']' 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:48.396 20:26:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.396 [2024-05-13 20:26:03.832054] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:12:48.396 [2024-05-13 20:26:03.832120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.396 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.396 [2024-05-13 20:26:03.911126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.396 [2024-05-13 20:26:03.986680] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.396 [2024-05-13 20:26:03.986722] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.396 [2024-05-13 20:26:03.986730] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.396 [2024-05-13 20:26:03.986737] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.396 [2024-05-13 20:26:03.986742] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.396 [2024-05-13 20:26:03.986913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.396 [2024-05-13 20:26:03.987032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.396 [2024-05-13 20:26:03.987156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.396 [2024-05-13 20:26:03.987158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.966 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:48.966 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:48.966 20:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:48.966 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:48.966 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.966 20:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.966 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:48.966 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.966 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.966 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.966 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:48.966 "tick_rate": 2400000000, 00:12:48.966 "poll_groups": [ 00:12:48.966 { 00:12:48.966 "name": "nvmf_tgt_poll_group_000", 00:12:48.966 "admin_qpairs": 0, 00:12:48.966 "io_qpairs": 0, 00:12:48.966 "current_admin_qpairs": 0, 00:12:48.966 "current_io_qpairs": 0, 00:12:48.966 "pending_bdev_io": 0, 00:12:48.966 "completed_nvme_io": 0, 00:12:48.966 "transports": [] 00:12:48.966 }, 00:12:48.966 { 00:12:48.966 "name": "nvmf_tgt_poll_group_001", 00:12:48.966 "admin_qpairs": 0, 00:12:48.966 "io_qpairs": 0, 00:12:48.966 "current_admin_qpairs": 0, 00:12:48.966 "current_io_qpairs": 0, 00:12:48.966 "pending_bdev_io": 0, 00:12:48.966 "completed_nvme_io": 0, 00:12:48.966 "transports": [] 00:12:48.966 }, 00:12:48.966 { 00:12:48.966 "name": "nvmf_tgt_poll_group_002", 00:12:48.966 "admin_qpairs": 0, 00:12:48.966 "io_qpairs": 0, 00:12:48.966 "current_admin_qpairs": 0, 00:12:48.966 "current_io_qpairs": 0, 00:12:48.966 "pending_bdev_io": 0, 00:12:48.966 "completed_nvme_io": 0, 00:12:48.966 "transports": [] 
00:12:48.966 }, 00:12:48.966 { 00:12:48.966 "name": "nvmf_tgt_poll_group_003", 00:12:48.966 "admin_qpairs": 0, 00:12:48.966 "io_qpairs": 0, 00:12:48.966 "current_admin_qpairs": 0, 00:12:48.966 "current_io_qpairs": 0, 00:12:48.966 "pending_bdev_io": 0, 00:12:48.966 "completed_nvme_io": 0, 00:12:48.967 "transports": [] 00:12:48.967 } 00:12:48.967 ] 00:12:48.967 }' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.967 [2024-05-13 20:26:04.778209] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:48.967 "tick_rate": 2400000000, 00:12:48.967 "poll_groups": [ 00:12:48.967 { 00:12:48.967 "name": "nvmf_tgt_poll_group_000", 00:12:48.967 "admin_qpairs": 0, 00:12:48.967 "io_qpairs": 0, 00:12:48.967 "current_admin_qpairs": 0, 00:12:48.967 "current_io_qpairs": 0, 00:12:48.967 "pending_bdev_io": 0, 00:12:48.967 "completed_nvme_io": 0, 00:12:48.967 "transports": [ 00:12:48.967 { 00:12:48.967 "trtype": "TCP" 00:12:48.967 } 00:12:48.967 ] 00:12:48.967 }, 00:12:48.967 { 00:12:48.967 "name": "nvmf_tgt_poll_group_001", 00:12:48.967 "admin_qpairs": 0, 00:12:48.967 "io_qpairs": 0, 00:12:48.967 "current_admin_qpairs": 0, 00:12:48.967 "current_io_qpairs": 0, 00:12:48.967 "pending_bdev_io": 0, 00:12:48.967 "completed_nvme_io": 0, 00:12:48.967 "transports": [ 00:12:48.967 { 00:12:48.967 "trtype": "TCP" 00:12:48.967 } 00:12:48.967 ] 00:12:48.967 }, 00:12:48.967 { 00:12:48.967 "name": "nvmf_tgt_poll_group_002", 00:12:48.967 "admin_qpairs": 0, 00:12:48.967 "io_qpairs": 0, 00:12:48.967 "current_admin_qpairs": 0, 00:12:48.967 "current_io_qpairs": 0, 00:12:48.967 "pending_bdev_io": 0, 00:12:48.967 "completed_nvme_io": 0, 00:12:48.967 "transports": [ 00:12:48.967 { 00:12:48.967 "trtype": "TCP" 00:12:48.967 } 00:12:48.967 ] 00:12:48.967 }, 00:12:48.967 { 00:12:48.967 "name": "nvmf_tgt_poll_group_003", 00:12:48.967 "admin_qpairs": 0, 00:12:48.967 "io_qpairs": 0, 00:12:48.967 "current_admin_qpairs": 0, 00:12:48.967 "current_io_qpairs": 0, 00:12:48.967 "pending_bdev_io": 0, 00:12:48.967 "completed_nvme_io": 0, 00:12:48.967 "transports": [ 00:12:48.967 { 00:12:48.967 "trtype": "TCP" 00:12:48.967 } 00:12:48.967 ] 00:12:48.967 } 00:12:48.967 ] 
00:12:48.967 }' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.967 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.228 Malloc1 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.228 [2024-05-13 20:26:04.967288] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:49.228 [2024-05-13 20:26:04.967545] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.228 20:26:04 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:49.228 20:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:49.228 [2024-05-13 20:26:04.994274] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:49.228 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:49.229 could not add new controller: failed to write to nvme-fabrics device 00:12:49.229 20:26:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:49.229 20:26:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:49.229 20:26:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:49.229 20:26:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:49.229 20:26:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:49.229 20:26:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.229 20:26:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.229 20:26:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.229 20:26:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.612 20:26:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
00:12:50.612 20:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:50.612 20:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.612 20:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:50.612 20:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.153 [2024-05-13 20:26:08.719528] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:53.153 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:53.153 could not add new controller: failed to write to nvme-fabrics device 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.153 20:26:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.536 20:26:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:54.536 20:26:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:54.536 20:26:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.536 20:26:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:54.536 20:26:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:56.449 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:56.449 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:56.449 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.449 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:56.449 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.449 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:56.449 20:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.449 20:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.449 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:56.449 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:56.449 20:26:12 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.449 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:56.449 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.736 [2024-05-13 20:26:12.434421] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.736 20:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.130 20:26:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.130 20:26:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:58.130 20:26:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.130 20:26:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:58.130 20:26:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:00.670 
20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.670 [2024-05-13 20:26:16.179350] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.670 20:26:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.052 20:26:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.052 20:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:02.052 20:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.052 20:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:02.052 20:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.960 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.220 20:26:19 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.220 [2024-05-13 20:26:19.926903] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.220 20:26:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.603 20:26:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.603 20:26:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:05.603 20:26:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.603 20:26:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:05.603 20:26:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:07.514 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:07.514 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:07.514 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.514 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:07.514 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.514 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:07.514 20:26:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.774 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.775 [2024-05-13 20:26:23.637774] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.775 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.775 20:26:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.775 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.775 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.775 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.775 20:26:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.775 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.775 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.775 20:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.775 20:26:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.686 20:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:13:09.686 20:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:09.686 20:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.686 20:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:09.686 20:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:11.596 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:11.596 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:11.596 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.596 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:11.596 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.596 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:11.596 20:26:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.597 
[2024-05-13 20:26:27.395949] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.597 20:26:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.984 20:26:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.984 20:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:12.984 20:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.984 20:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:12.984 20:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:15.530 20:26:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:15.530 20:26:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:15.530 20:26:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.530 20:26:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:15.530 20:26:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.530 20:26:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:15.530 20:26:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.530 [2024-05-13 20:26:31.107773] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.530 [2024-05-13 20:26:31.171910] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.530 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 [2024-05-13 20:26:31.232096] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.531 
20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 [2024-05-13 20:26:31.292300] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 [2024-05-13 20:26:31.352517] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:15.531 "tick_rate": 2400000000, 00:13:15.531 "poll_groups": [ 00:13:15.531 { 00:13:15.531 "name": "nvmf_tgt_poll_group_000", 00:13:15.531 "admin_qpairs": 0, 00:13:15.531 "io_qpairs": 224, 00:13:15.531 "current_admin_qpairs": 0, 00:13:15.531 "current_io_qpairs": 0, 00:13:15.531 "pending_bdev_io": 0, 00:13:15.531 "completed_nvme_io": 226, 00:13:15.531 "transports": [ 00:13:15.531 { 00:13:15.531 "trtype": "TCP" 00:13:15.531 } 00:13:15.531 ] 00:13:15.531 }, 00:13:15.531 { 00:13:15.531 "name": "nvmf_tgt_poll_group_001", 00:13:15.531 "admin_qpairs": 1, 00:13:15.531 "io_qpairs": 223, 00:13:15.531 "current_admin_qpairs": 0, 00:13:15.531 "current_io_qpairs": 0, 00:13:15.531 "pending_bdev_io": 0, 00:13:15.531 "completed_nvme_io": 296, 00:13:15.531 "transports": [ 00:13:15.531 { 00:13:15.531 "trtype": "TCP" 00:13:15.531 } 00:13:15.531 ] 00:13:15.531 }, 00:13:15.531 { 00:13:15.531 "name": "nvmf_tgt_poll_group_002", 00:13:15.531 "admin_qpairs": 6, 00:13:15.531 "io_qpairs": 218, 00:13:15.531 "current_admin_qpairs": 0, 00:13:15.531 "current_io_qpairs": 0, 00:13:15.531 "pending_bdev_io": 0, 00:13:15.531 "completed_nvme_io": 489, 00:13:15.531 "transports": [ 00:13:15.531 { 00:13:15.531 "trtype": "TCP" 00:13:15.531 } 00:13:15.531 ] 00:13:15.531 }, 00:13:15.531 { 00:13:15.531 "name": "nvmf_tgt_poll_group_003", 00:13:15.531 "admin_qpairs": 0, 00:13:15.531 "io_qpairs": 224, 00:13:15.531 "current_admin_qpairs": 0, 00:13:15.531 "current_io_qpairs": 0, 00:13:15.531 "pending_bdev_io": 0, 00:13:15.531 "completed_nvme_io": 228, 00:13:15.531 "transports": [ 00:13:15.531 { 00:13:15.531 "trtype": "TCP" 00:13:15.531 } 00:13:15.531 ] 00:13:15.531 } 00:13:15.531 ] 00:13:15.531 }' 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:15.531 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.793 rmmod nvme_tcp 00:13:15.793 rmmod nvme_fabrics 00:13:15.793 rmmod nvme_keyring 00:13:15.793 
20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2928106 ']' 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2928106 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 2928106 ']' 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 2928106 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2928106 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2928106' 00:13:15.793 killing process with pid 2928106 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 2928106 00:13:15.793 [2024-05-13 20:26:31.630986] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:15.793 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 2928106 00:13:16.054 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:16.054 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:16.054 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:16.054 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.054 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:16.054 20:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.054 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.054 20:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.975 20:26:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:17.976 00:13:17.976 real 0m38.054s 00:13:17.976 user 1m53.311s 00:13:17.976 sys 0m7.614s 00:13:17.976 20:26:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:17.976 20:26:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.976 ************************************ 00:13:17.976 END TEST nvmf_rpc 00:13:17.976 ************************************ 00:13:17.976 20:26:33 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:17.976 20:26:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:17.976 20:26:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:17.976 20:26:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:18.244 ************************************ 00:13:18.244 START TEST nvmf_invalid 00:13:18.244 ************************************ 00:13:18.244 20:26:33 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:18.244 * Looking for test storage... 00:13:18.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.244 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.245 20:26:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:26.385 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:26.385 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:26.385 Found net devices under 0000:31:00.0: cvl_0_0 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:26.385 Found net devices under 0000:31:00.1: cvl_0_1 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.385 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:26.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:26.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:13:26.386 00:13:26.386 --- 10.0.0.2 ping statistics --- 00:13:26.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.386 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.486 ms 00:13:26.386 00:13:26.386 --- 10.0.0.1 ping statistics --- 00:13:26.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.386 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2938456 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2938456 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 2938456 ']' 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:26.386 20:26:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:26.386 [2024-05-13 20:26:41.940413] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:13:26.386 [2024-05-13 20:26:41.940473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.386 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.386 [2024-05-13 20:26:42.023013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.386 [2024-05-13 20:26:42.098809] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.386 [2024-05-13 20:26:42.098853] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.386 [2024-05-13 20:26:42.098860] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.386 [2024-05-13 20:26:42.098867] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.386 [2024-05-13 20:26:42.098873] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.386 [2024-05-13 20:26:42.099047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.386 [2024-05-13 20:26:42.099151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.386 [2024-05-13 20:26:42.099304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.386 [2024-05-13 20:26:42.099307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.955 20:26:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:26.955 20:26:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:13:26.955 20:26:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.955 20:26:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:26.955 20:26:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:26.955 20:26:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.955 20:26:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:26.955 20:26:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1751 00:13:27.214 [2024-05-13 20:26:42.909261] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:27.214 20:26:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:27.214 { 00:13:27.214 "nqn": "nqn.2016-06.io.spdk:cnode1751", 00:13:27.214 "tgt_name": "foobar", 00:13:27.214 "method": "nvmf_create_subsystem", 00:13:27.214 "req_id": 1 00:13:27.214 } 00:13:27.214 Got JSON-RPC error response 00:13:27.214 response: 00:13:27.214 { 00:13:27.214 "code": -32603, 00:13:27.214 "message": "Unable to find target foobar" 00:13:27.214 }' 00:13:27.214 20:26:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:27.214 { 00:13:27.214 "nqn": "nqn.2016-06.io.spdk:cnode1751", 00:13:27.215 "tgt_name": "foobar", 00:13:27.215 "method": "nvmf_create_subsystem", 00:13:27.215 "req_id": 1 00:13:27.215 } 00:13:27.215 Got JSON-RPC error response 00:13:27.215 response: 00:13:27.215 { 00:13:27.215 "code": -32603, 00:13:27.215 "message": "Unable to find target foobar" 00:13:27.215 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:27.215 20:26:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:27.215 20:26:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21237 00:13:27.215 [2024-05-13 20:26:43.081837] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21237: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:27.215 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:27.215 { 00:13:27.215 "nqn": "nqn.2016-06.io.spdk:cnode21237", 00:13:27.215 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:27.215 "method": "nvmf_create_subsystem", 00:13:27.215 "req_id": 1 00:13:27.215 } 00:13:27.215 Got JSON-RPC error response 00:13:27.215 response: 00:13:27.215 { 00:13:27.215 "code": -32602, 00:13:27.215 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:27.215 }' 00:13:27.215 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:27.215 { 00:13:27.215 "nqn": "nqn.2016-06.io.spdk:cnode21237", 00:13:27.215 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:27.215 "method": "nvmf_create_subsystem", 00:13:27.215 "req_id": 1 00:13:27.215 } 00:13:27.215 Got JSON-RPC error response 00:13:27.215 response: 00:13:27.215 { 00:13:27.215 "code": -32602, 00:13:27.215 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:27.215 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:27.215 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:27.215 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18816 00:13:27.474 [2024-05-13 20:26:43.254453] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18816: invalid model number 'SPDK_Controller' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:27.474 { 00:13:27.474 "nqn": "nqn.2016-06.io.spdk:cnode18816", 00:13:27.474 "model_number": "SPDK_Controller\u001f", 00:13:27.474 "method": "nvmf_create_subsystem", 00:13:27.474 "req_id": 1 00:13:27.474 } 00:13:27.474 Got JSON-RPC error response 00:13:27.474 response: 00:13:27.474 { 00:13:27.474 "code": -32602, 00:13:27.474 "message": "Invalid MN SPDK_Controller\u001f" 00:13:27.474 }' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:27.474 { 00:13:27.474 "nqn": "nqn.2016-06.io.spdk:cnode18816", 00:13:27.474 "model_number": "SPDK_Controller\u001f", 00:13:27.474 "method": "nvmf_create_subsystem", 00:13:27.474 "req_id": 1 00:13:27.474 } 00:13:27.474 Got JSON-RPC error response 00:13:27.474 response: 00:13:27.474 { 00:13:27.474 "code": -32602, 00:13:27.474 "message": "Invalid MN SPDK_Controller\u001f" 00:13:27.474 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.474 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 116 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:27.475 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ N == \- ]] 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'NH`F:j'\''nj0i[j~tF}]T' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'NH`F:j'\''nj0i[j~tF}]T' nqn.2016-06.io.spdk:cnode31145 00:13:27.735 [2024-05-13 20:26:43.591530] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31145: invalid serial number 'NH`F:j'nj0i[j~tF}]T' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:27.735 { 00:13:27.735 "nqn": "nqn.2016-06.io.spdk:cnode31145", 00:13:27.735 "serial_number": "NH`F:\u007fj'\''nj0i[j~tF}]T\u007f", 00:13:27.735 "method": "nvmf_create_subsystem", 00:13:27.735 "req_id": 1 00:13:27.735 } 00:13:27.735 Got JSON-RPC error response 00:13:27.735 response: 00:13:27.735 { 00:13:27.735 
"code": -32602, 00:13:27.735 "message": "Invalid SN NH`F:\u007fj'\''nj0i[j~tF}]T\u007f" 00:13:27.735 }' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:27.735 { 00:13:27.735 "nqn": "nqn.2016-06.io.spdk:cnode31145", 00:13:27.735 "serial_number": "NH`F:\u007fj'nj0i[j~tF}]T\u007f", 00:13:27.735 "method": "nvmf_create_subsystem", 00:13:27.735 "req_id": 1 00:13:27.735 } 00:13:27.735 Got JSON-RPC error response 00:13:27.735 response: 00:13:27.735 { 00:13:27.735 "code": -32602, 00:13:27.735 "message": "Invalid SN NH`F:\u007fj'nj0i[j~tF}]T\u007f" 00:13:27.735 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 116 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.735 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:27.736 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:27.736 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:27.736 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.736 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:27.996 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
110 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ` == \- ]] 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '`7:7tf\'\''TQM?{p{xKN4<\!;(FXOj?+wswD:n+&:f' 00:13:27.997 20:26:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '`7:7tf\'\''TQM?{p{xKN4<\!;(FXOj?+wswD:n+&:f' nqn.2016-06.io.spdk:cnode32293 00:13:28.257 [2024-05-13 20:26:44.077089] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32293: invalid model number '`7:7tf\'TQM?{p{xKN4<\!;(FXOj?+wswD:n+&:f' 00:13:28.257 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:28.257 { 00:13:28.257 "nqn": "nqn.2016-06.io.spdk:cnode32293", 00:13:28.257 "model_number": "`7:7tf\\'\''TQM?{p{xKN4<\\!;(FX\u007fOj?+wswD:n+&:f", 00:13:28.257 "method": "nvmf_create_subsystem", 00:13:28.257 "req_id": 1 00:13:28.257 } 00:13:28.257 Got JSON-RPC error response 00:13:28.257 response: 00:13:28.257 { 00:13:28.257 "code": -32602, 00:13:28.257 "message": "Invalid MN `7:7tf\\'\''TQM?{p{xKN4<\\!;(FX\u007fOj?+wswD:n+&:f" 00:13:28.257 }' 00:13:28.257 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:28.257 { 00:13:28.257 "nqn": "nqn.2016-06.io.spdk:cnode32293", 00:13:28.257 "model_number": "`7:7tf\\'TQM?{p{xKN4<\\!;(FX\u007fOj?+wswD:n+&:f", 00:13:28.257 "method": 
"nvmf_create_subsystem", 00:13:28.257 "req_id": 1 00:13:28.257 } 00:13:28.257 Got JSON-RPC error response 00:13:28.257 response: 00:13:28.257 { 00:13:28.257 "code": -32602, 00:13:28.257 "message": "Invalid MN `7:7tf\\'TQM?{p{xKN4<\\!;(FX\u007fOj?+wswD:n+&:f" 00:13:28.257 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:28.257 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:28.518 [2024-05-13 20:26:44.249738] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.518 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:28.518 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:28.518 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:28.518 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:28.779 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:28.779 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:28.779 [2024-05-13 20:26:44.606870] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:28.779 [2024-05-13 20:26:44.606937] nvmf_rpc.c: 789:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:28.780 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:28.780 { 00:13:28.780 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:28.780 "listen_address": { 00:13:28.780 "trtype": "tcp", 00:13:28.780 "traddr": "", 00:13:28.780 "trsvcid": "4421" 00:13:28.780 }, 00:13:28.780 "method": "nvmf_subsystem_remove_listener", 00:13:28.780 "req_id": 1 00:13:28.780 } 00:13:28.780 Got JSON-RPC error response 00:13:28.780 response: 00:13:28.780 { 00:13:28.780 "code": -32602, 00:13:28.780 "message": "Invalid parameters" 00:13:28.780 }' 00:13:28.780 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:28.780 { 00:13:28.780 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:28.780 "listen_address": { 00:13:28.780 "trtype": "tcp", 00:13:28.780 "traddr": "", 00:13:28.780 "trsvcid": "4421" 00:13:28.780 }, 00:13:28.780 "method": "nvmf_subsystem_remove_listener", 00:13:28.780 "req_id": 1 00:13:28.780 } 00:13:28.780 Got JSON-RPC error response 00:13:28.780 response: 00:13:28.780 { 00:13:28.780 "code": -32602, 00:13:28.780 "message": "Invalid parameters" 00:13:28.780 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:28.780 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26859 -i 0 00:13:29.040 [2024-05-13 20:26:44.779439] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26859: invalid cntlid range [0-65519] 00:13:29.040 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:29.040 { 00:13:29.040 "nqn": "nqn.2016-06.io.spdk:cnode26859", 00:13:29.040 "min_cntlid": 0, 00:13:29.040 "method": "nvmf_create_subsystem", 00:13:29.040 "req_id": 1 00:13:29.040 } 00:13:29.040 Got JSON-RPC error response 00:13:29.040 response: 
00:13:29.040 { 00:13:29.040 "code": -32602, 00:13:29.040 "message": "Invalid cntlid range [0-65519]" 00:13:29.040 }' 00:13:29.040 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:29.040 { 00:13:29.040 "nqn": "nqn.2016-06.io.spdk:cnode26859", 00:13:29.040 "min_cntlid": 0, 00:13:29.040 "method": "nvmf_create_subsystem", 00:13:29.040 "req_id": 1 00:13:29.040 } 00:13:29.040 Got JSON-RPC error response 00:13:29.040 response: 00:13:29.040 { 00:13:29.040 "code": -32602, 00:13:29.040 "message": "Invalid cntlid range [0-65519]" 00:13:29.040 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:29.040 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14046 -i 65520 00:13:29.040 [2024-05-13 20:26:44.952009] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14046: invalid cntlid range [65520-65519] 00:13:29.040 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:29.040 { 00:13:29.040 "nqn": "nqn.2016-06.io.spdk:cnode14046", 00:13:29.040 "min_cntlid": 65520, 00:13:29.040 "method": "nvmf_create_subsystem", 00:13:29.040 "req_id": 1 00:13:29.040 } 00:13:29.040 Got JSON-RPC error response 00:13:29.040 response: 00:13:29.040 { 00:13:29.040 "code": -32602, 00:13:29.040 "message": "Invalid cntlid range [65520-65519]" 00:13:29.040 }' 00:13:29.040 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:29.040 { 00:13:29.040 "nqn": "nqn.2016-06.io.spdk:cnode14046", 00:13:29.040 "min_cntlid": 65520, 00:13:29.040 "method": "nvmf_create_subsystem", 00:13:29.040 "req_id": 1 00:13:29.040 } 00:13:29.040 Got JSON-RPC error response 00:13:29.040 response: 00:13:29.040 { 00:13:29.040 "code": -32602, 00:13:29.040 "message": "Invalid cntlid range [65520-65519]" 00:13:29.040 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:29.301 20:26:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3130 -I 0 00:13:29.301 [2024-05-13 20:26:45.124610] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3130: invalid cntlid range [1-0] 00:13:29.301 20:26:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:29.301 { 00:13:29.301 "nqn": "nqn.2016-06.io.spdk:cnode3130", 00:13:29.301 "max_cntlid": 0, 00:13:29.301 "method": "nvmf_create_subsystem", 00:13:29.301 "req_id": 1 00:13:29.301 } 00:13:29.301 Got JSON-RPC error response 00:13:29.301 response: 00:13:29.301 { 00:13:29.301 "code": -32602, 00:13:29.301 "message": "Invalid cntlid range [1-0]" 00:13:29.301 }' 00:13:29.301 20:26:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:29.301 { 00:13:29.301 "nqn": "nqn.2016-06.io.spdk:cnode3130", 00:13:29.301 "max_cntlid": 0, 00:13:29.301 "method": "nvmf_create_subsystem", 00:13:29.301 "req_id": 1 00:13:29.301 } 00:13:29.301 Got JSON-RPC error response 00:13:29.301 response: 00:13:29.301 { 00:13:29.301 "code": -32602, 00:13:29.301 "message": "Invalid cntlid range [1-0]" 00:13:29.301 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:29.301 20:26:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24305 -I 65520 00:13:29.561 [2024-05-13 20:26:45.297207] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24305: invalid cntlid range [1-65520] 00:13:29.561 20:26:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:29.561 { 00:13:29.561 "nqn": "nqn.2016-06.io.spdk:cnode24305", 00:13:29.561 "max_cntlid": 65520, 00:13:29.561 "method": "nvmf_create_subsystem", 00:13:29.561 "req_id": 1 00:13:29.561 } 00:13:29.561 Got JSON-RPC error response 00:13:29.561 response: 00:13:29.561 { 00:13:29.561 "code": -32602, 00:13:29.561 "message": "Invalid cntlid range [1-65520]" 00:13:29.561 }' 00:13:29.561 20:26:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:29.561 { 00:13:29.561 "nqn": "nqn.2016-06.io.spdk:cnode24305", 00:13:29.561 "max_cntlid": 65520, 00:13:29.561 "method": "nvmf_create_subsystem", 00:13:29.561 "req_id": 1 00:13:29.561 } 00:13:29.561 Got JSON-RPC error response 00:13:29.561 response: 00:13:29.561 { 00:13:29.561 "code": -32602, 00:13:29.561 "message": "Invalid cntlid range [1-65520]" 00:13:29.561 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:29.561 20:26:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12266 -i 6 -I 5 00:13:29.561 [2024-05-13 20:26:45.469768] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12266: invalid cntlid range [6-5] 00:13:29.561 20:26:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:29.561 { 00:13:29.561 "nqn": "nqn.2016-06.io.spdk:cnode12266", 00:13:29.561 "min_cntlid": 6, 00:13:29.561 "max_cntlid": 5, 00:13:29.561 "method": "nvmf_create_subsystem", 00:13:29.561 "req_id": 1 00:13:29.561 } 00:13:29.561 Got JSON-RPC error response 00:13:29.561 response: 00:13:29.561 { 00:13:29.561 "code": -32602, 00:13:29.561 "message": "Invalid cntlid range [6-5]" 00:13:29.561 }' 00:13:29.561 20:26:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:29.561 { 00:13:29.561 "nqn": "nqn.2016-06.io.spdk:cnode12266", 00:13:29.561 "min_cntlid": 6, 00:13:29.561 "max_cntlid": 5, 00:13:29.561 "method": "nvmf_create_subsystem", 00:13:29.561 "req_id": 1 00:13:29.561 } 00:13:29.561 Got JSON-RPC error response 00:13:29.561 response: 00:13:29.561 { 00:13:29.561 "code": -32602, 00:13:29.561 "message": "Invalid cntlid range [6-5]" 00:13:29.561 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:29.561 20:26:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:29.821 { 00:13:29.821 "name": "foobar", 00:13:29.821 "method": "nvmf_delete_target", 00:13:29.821 "req_id": 1 00:13:29.821 } 00:13:29.821 Got JSON-RPC error response 00:13:29.821 response: 00:13:29.821 { 00:13:29.821 "code": -32602, 00:13:29.821 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:29.821 }' 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:29.821 { 00:13:29.821 "name": "foobar", 00:13:29.821 "method": "nvmf_delete_target", 00:13:29.821 "req_id": 1 00:13:29.821 } 00:13:29.821 Got JSON-RPC error response 00:13:29.821 response: 00:13:29.821 { 00:13:29.821 "code": -32602, 00:13:29.821 "message": "The specified target doesn't exist, cannot delete it." 
00:13:29.821 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:29.821 rmmod nvme_tcp 00:13:29.821 rmmod nvme_fabrics 00:13:29.821 rmmod nvme_keyring 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2938456 ']' 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2938456 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 2938456 ']' 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 2938456 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2938456 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2938456' 00:13:29.821 killing process with pid 2938456 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 2938456 00:13:29.821 [2024-05-13 20:26:45.729566] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:29.821 20:26:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 2938456 00:13:30.081 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:30.081 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:30.081 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:30.081 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.081 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:30.081 20:26:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.081 20:26:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.081 20:26:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.046 20:26:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
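The character-by-character xtrace above is the expansion of the gen_random_s helper in target/invalid.sh. Reconstructed from the trace alone (the chars=('32' ... '127') array, the printf %x / echo -e pairs, the string+= appends, and the "[[ <first char> == \- ]]" guard at invalid.sh@28), the helper looks roughly like the sketch below; the real script may differ in detail, and the leading-dash branch is not exercised in this log, so that line is a placeholder.

    # Sketch of gen_random_s as implied by the xtrace; not the verbatim upstream helper.
    gen_random_s() {
        local length=$1 ll
        local chars=({32..127})   # decimal code points: printable ASCII plus DEL, per the trace
        local string=
        for (( ll = 0; ll < length; ll++ )); do
            # pick a random code point, convert it to hex, render it, append it
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        [[ ${string:0:1} == - ]] && string="x${string:1}"   # placeholder for the unexercised leading-dash branch
        echo "$string"
    }

    # Usage matching invalid.sh@58 in this log: a 41-character garbage model number is rejected
    # with JSON-RPC error -32602 "Invalid MN ...", which the test then pattern-matches.
    #   scripts/rpc.py nvmf_create_subsystem -d "$(gen_random_s 41)" nqn.2016-06.io.spdk:cnode32293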
00:13:32.046 00:13:32.046 real 0m13.999s 00:13:32.046 user 0m19.390s 00:13:32.046 sys 0m6.714s 00:13:32.046 20:26:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:32.046 20:26:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:32.046 ************************************ 00:13:32.046 END TEST nvmf_invalid 00:13:32.046 ************************************ 00:13:32.046 20:26:47 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:32.046 20:26:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:32.046 20:26:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:32.046 20:26:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:32.319 ************************************ 00:13:32.319 START TEST nvmf_abort 00:13:32.319 ************************************ 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:32.319 * Looking for test storage... 00:13:32.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:32.319 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.320 20:26:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.320 20:26:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.320 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:32.320 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:32.320 20:26:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:32.320 20:26:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.461 20:26:55 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:40.461 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:40.461 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:40.461 Found net devices under 0000:31:00.0: cvl_0_0 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:40.461 Found net devices under 0000:31:00.1: cvl_0_1 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.461 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:40.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:40.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:13:40.462 00:13:40.462 --- 10.0.0.2 ping statistics --- 00:13:40.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.462 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:13:40.462 00:13:40.462 --- 10.0.0.1 ping statistics --- 00:13:40.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.462 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2944029 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2944029 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 2944029 ']' 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:40.462 20:26:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:40.462 [2024-05-13 20:26:55.927823] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
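The nvmf_tcp_init trace above (nvmf/common.sh@229-268) moves one port of the E810 pair into a private network namespace, gives the target and initiator back-to-back addresses, and verifies both directions with ping. Condensed into plain commands with the device names and addresses exactly as they appear in this run (run as root; a sketch of what the harness does, not a replacement for common.sh):

    # Target side lives in its own netns; initiator side stays in the root namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP (port 4420) through
    ping -c 1 10.0.0.2                                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns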
00:13:40.462 [2024-05-13 20:26:55.927889] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.462 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.462 [2024-05-13 20:26:56.023434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:40.462 [2024-05-13 20:26:56.117413] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.462 [2024-05-13 20:26:56.117478] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.462 [2024-05-13 20:26:56.117493] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.462 [2024-05-13 20:26:56.117500] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.462 [2024-05-13 20:26:56.117506] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.462 [2024-05-13 20:26:56.117647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.462 [2024-05-13 20:26:56.117794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.462 [2024-05-13 20:26:56.117798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:41.033 [2024-05-13 20:26:56.759333] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:41.033 Malloc0 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:41.033 Delay0 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:41.033 20:26:56 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:41.033 [2024-05-13 20:26:56.838519] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:41.033 [2024-05-13 20:26:56.838752] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.033 20:26:56 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:41.033 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.033 [2024-05-13 20:26:56.917629] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:43.579 Initializing NVMe Controllers 00:13:43.579 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:43.579 controller IO queue size 128 less than required 00:13:43.579 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:43.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:43.579 Initialization complete. Launching workers. 
00:13:43.579 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33528 00:13:43.579 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33589, failed to submit 62 00:13:43.579 success 33532, unsuccess 57, failed 0 00:13:43.579 20:26:58 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:43.579 20:26:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.579 20:26:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:43.579 20:26:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.579 20:26:58 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:43.579 20:26:58 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:43.579 20:26:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.579 20:26:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:43.579 20:26:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:43.579 20:26:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:43.579 20:26:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.579 20:26:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:43.579 rmmod nvme_tcp 00:13:43.579 rmmod nvme_fabrics 00:13:43.579 rmmod nvme_keyring 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2944029 ']' 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2944029 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 2944029 ']' 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 2944029 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2944029 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2944029' 00:13:43.579 killing process with pid 2944029 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 2944029 00:13:43.579 [2024-05-13 20:26:59.097307] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 2944029 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.579 
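For readers following the trace, the nvmf_abort run that just completed above reduces to a short RPC sequence against the target running in the cvl_0_0_ns_spdk namespace, followed by the SPDK abort example on the initiator side. This is a minimal sketch assembled from the commands visible in the trace (paths are the ones used by this workspace; rpc.py talks to the default /var/tmp/spdk.sock the target listens on):

    # Target-side provisioning, as driven by test/nvmf/target/abort.sh
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
    $RPC bdev_malloc_create 64 4096 -b Malloc0            # 64 MB RAM-backed bdev, 4096-byte blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # artificial latency so aborts have in-flight I/O to catch
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: queue 128-deep I/O for 1 second and abort what is outstanding
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The result block above (33589 aborts submitted, 33532 successful, 0 failed) is the pass criterion the test checks before tearing the subsystem down.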
20:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.579 20:26:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.493 20:27:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:45.493 00:13:45.493 real 0m13.299s 00:13:45.493 user 0m13.243s 00:13:45.493 sys 0m6.537s 00:13:45.493 20:27:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:45.493 20:27:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:45.493 ************************************ 00:13:45.493 END TEST nvmf_abort 00:13:45.493 ************************************ 00:13:45.493 20:27:01 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:45.493 20:27:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:45.493 20:27:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:45.493 20:27:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:45.493 ************************************ 00:13:45.493 START TEST nvmf_ns_hotplug_stress 00:13:45.493 ************************************ 00:13:45.493 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:45.753 * Looking for test storage... 00:13:45.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.753 
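Both tests in this log share the loopback-over-namespaces topology that nvmf_tcp_init set up earlier in the trace: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, while the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, so NVMe/TCP traffic leaves one physical E810 port and returns on the other (NET_TYPE=phy) instead of being short-circuited through loopback. Condensed from the commands shown in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # let the NVMe/TCP port through
    ping -c 1 10.0.0.2                                                     # connectivity sanity checks, as in the trace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmftestfini reverses this (addr flush plus remove_spdk_ns), which is the `ip -4 addr flush cvl_0_1` seen at the end of the abort test; the setup is then repeated from scratch for ns_hotplug_stress below.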
20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.753 
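common.sh also prepares an initiator identity for tests that attach with the kernel NVMe initiator: the host NQN comes from `nvme gen-hostnqn`, its UUID part doubles as the host ID, and both are stored in NVME_HOST for later use with `nvme connect` (NVME_CONNECT in the trace). Neither test shown in this excerpt issues the connect itself (both drive I/O through SPDK's userspace initiator), but a hedged sketch of such a call, using addresses and the subsystem name from this trace and standard nvme-cli options, would look like:

    HOSTNQN=$(nvme gen-hostnqn)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn "$HOSTNQN" --hostid "${HOSTNQN##*:}"     # host ID = UUID suffix of the generated NQN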
20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:45.753 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:45.754 20:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:53.897 20:27:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:53.897 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:53.897 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.897 
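The device-discovery block above (continuing below) is common.sh classifying NICs by PCI vendor:device ID; the 0x8086:0x159b functions found at 0000:31:00.0/0000:31:00.1 are the E810 ports bound to the ice driver, selected because the run was configured with SPDK_TEST_NVMF_NICS=e810. Each PCI function is then resolved to its kernel netdev through sysfs (the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion in the trace). A stand-alone sketch of the same lookup, assuming pciutils is installed:

    # Map each Intel 0x159b (E810/ice) PCI function to its net device,
    # mirroring the sysfs walk nvmf/common.sh performs in this trace.
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] && echo "$pci -> $(basename "$net")"
        done
    done

On this node the walk yields cvl_0_0 under 0000:31:00.0 and cvl_0_1 under 0000:31:00.1, which is exactly what the "Found net devices under ..." lines report.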
20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:53.897 Found net devices under 0000:31:00.0: cvl_0_0 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:53.897 Found net devices under 0000:31:00.1: cvl_0_1 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:53.897 
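Worth noting before the target comes up again below: nvmfappstart launches nvmf_tgt inside the namespace with `-m 0xE`, i.e. a core mask of binary 1110, which is why the startup notices report "Total cores available: 3" and reactors on cores 1, 2 and 3, leaving core 0 for the abort/perf initiator processes that run with `-c 0x1`. The launch line, as it appears in the trace (with `-e 0xFFFF` enabling all tracepoint groups, per the "Tracepoint Group Mask 0xFFFF specified" notice, and `-i 0` selecting shared-memory instance 0):

    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE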
20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.897 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.898 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:53.898 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:53.898 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.898 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.898 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.898 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.898 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:53.898 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.898 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.898 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.898 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:53.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:13:53.898 00:13:53.898 --- 10.0.0.2 ping statistics --- 00:13:53.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.898 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:13:53.898 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:54.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:13:54.159 00:13:54.159 --- 10.0.0.1 ping statistics --- 00:13:54.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.159 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2949979 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2949979 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 2949979 ']' 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:54.159 20:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.159 [2024-05-13 20:27:09.937537] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:13:54.159 [2024-05-13 20:27:09.937599] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.159 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.159 [2024-05-13 20:27:10.032303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:54.420 [2024-05-13 20:27:10.130705] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:54.420 [2024-05-13 20:27:10.130770] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.420 [2024-05-13 20:27:10.130779] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.420 [2024-05-13 20:27:10.130786] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.420 [2024-05-13 20:27:10.130791] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.420 [2024-05-13 20:27:10.130956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.420 [2024-05-13 20:27:10.131100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.420 [2024-05-13 20:27:10.131103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.992 20:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:54.992 20:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:54.992 20:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:54.992 20:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:54.992 20:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.992 20:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.992 20:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:54.992 20:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:54.992 [2024-05-13 20:27:10.896474] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.992 20:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:55.253 20:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.515 [2024-05-13 20:27:11.229621] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:55.515 [2024-05-13 20:27:11.229880] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.515 20:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:55.515 20:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:55.776 Malloc0 00:13:55.776 20:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:56.037 Delay0 00:13:56.037 20:27:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.037 20:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:56.298 NULL1 00:13:56.298 20:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:56.559 20:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:56.559 20:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2950468 00:13:56.559 20:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:13:56.559 20:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.559 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.559 20:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.820 20:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:56.820 20:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:57.081 true 00:13:57.081 20:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:13:57.081 20:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.081 20:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.342 20:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:57.342 20:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:57.342 true 00:13:57.604 20:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:13:57.604 20:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.604 20:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.865 20:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:57.865 20:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1003 00:13:57.865 true 00:13:57.865 20:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:13:57.865 20:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.125 20:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.385 20:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:58.385 20:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:58.385 true 00:13:58.385 20:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:13:58.385 20:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.646 20:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.907 20:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:58.907 20:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:58.907 true 00:13:58.907 20:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:13:58.907 20:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.166 20:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.428 20:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:59.428 20:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:59.428 true 00:13:59.428 20:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:13:59.428 20:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.688 20:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.948 20:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:59.948 20:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:59.948 true 00:13:59.948 20:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 
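The long run of repeated RPCs above and below is the actual stress loop of ns_hotplug_stress.sh: the spdk_nvme_perf reader started earlier (PERF_PID=2950468, 30 seconds of 512-byte random reads at queue depth 128, with -Q 1000 presumably letting it ride through the I/O errors the hot-removals provoke) keeps hammering cnode1 while the script hot-removes namespace 1, re-attaches Delay0, and grows NULL1 by one block on every pass. Reduced to its skeleton, with PERF_PID assumed to hold the perf process ID as in the trace:

    # Skeleton of the loop driving the repeated remove/add/resize RPCs in this trace.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do              # keep going until the perf reader exits
        $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $RPC nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $RPC bdev_null_resize NULL1 "$null_size"
    done

Each "true" in the trace is the return value of bdev_null_resize; the null_size counter (1001, 1002, ...) is simply how many hotplug iterations the 30-second perf window allowed.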
00:13:59.948 20:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.208 20:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.208 20:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:00.208 20:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:00.469 true 00:14:00.469 20:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:00.469 20:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.729 20:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.729 20:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:00.729 20:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:00.991 true 00:14:00.991 20:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:00.991 20:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.252 20:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.252 20:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:01.252 20:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:01.513 true 00:14:01.513 20:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:01.513 20:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.774 20:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.774 20:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:01.774 20:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:02.043 true 00:14:02.043 20:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:02.043 20:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.043 20:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.303 20:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:02.303 20:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:02.564 true 00:14:02.564 20:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:02.564 20:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.564 20:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.824 20:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:02.824 20:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:03.084 true 00:14:03.084 20:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:03.085 20:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.085 20:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.345 20:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:03.345 20:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:03.345 true 00:14:03.605 20:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:03.605 20:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.605 20:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.866 20:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:03.866 20:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:03.866 true 00:14:03.866 20:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:03.866 20:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.126 20:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.387 20:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:04.387 20:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:04.387 true 00:14:04.387 20:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:04.387 20:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.647 20:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.908 20:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:04.908 20:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:04.908 true 00:14:04.908 20:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:04.908 20:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.169 20:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.429 20:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:05.429 20:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:05.429 true 00:14:05.429 20:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:05.429 20:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.689 20:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.689 20:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:05.689 20:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:05.949 true 00:14:05.950 20:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:05.950 20:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.210 20:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.210 20:27:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:06.210 20:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:06.470 true 00:14:06.470 20:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:06.470 20:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.760 20:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.760 20:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:06.760 20:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:07.036 true 00:14:07.036 20:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:07.036 20:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.036 20:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.297 20:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:07.297 20:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:07.557 true 00:14:07.557 20:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:07.557 20:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.557 20:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.818 20:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:07.818 20:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:07.818 true 00:14:08.079 20:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:08.079 20:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.079 20:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.339 20:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:08.339 20:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:08.339 true 00:14:08.339 20:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:08.339 20:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.599 20:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.860 20:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:08.860 20:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:08.860 true 00:14:08.860 20:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:08.860 20:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.121 20:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.382 20:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:09.382 20:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:09.382 true 00:14:09.382 20:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:09.382 20:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.644 20:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.644 20:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:09.644 20:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:09.906 true 00:14:09.906 20:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:09.906 20:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.167 20:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.167 20:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:10.167 20:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:10.429 true 00:14:10.429 
20:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:10.429 20:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.690 20:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.690 20:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:10.690 20:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:10.952 true 00:14:10.952 20:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:10.952 20:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.952 20:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.213 20:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:11.213 20:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:11.474 true 00:14:11.474 20:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:11.474 20:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.474 20:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.735 20:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:11.735 20:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:11.996 true 00:14:11.996 20:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:11.996 20:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.996 20:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.257 20:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:12.257 20:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:12.257 true 00:14:12.518 20:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:12.518 20:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.518 20:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.779 20:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:12.779 20:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:12.779 true 00:14:12.779 20:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:12.779 20:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.040 20:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.300 20:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:13.300 20:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:13.300 true 00:14:13.300 20:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:13.300 20:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.561 20:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.821 20:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:13.821 20:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:13.821 true 00:14:13.821 20:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:13.821 20:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.082 20:27:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.342 20:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:14.342 20:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:14.342 true 00:14:14.342 20:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:14.342 20:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
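The entries above are one pass of the hot-plug/resize loop that repeats throughout this phase: while kill -0 confirms the I/O workload (PID 2950468 here) is still running, the script removes namespace 1 from cnode1, re-adds the Delay0 bdev as namespace 1, bumps null_size, and resizes NULL1 to the new size. A minimal bash sketch of that pattern, assuming the same rpc.py calls shown in the trace (variable names such as rpc and perf_pid are illustrative; the real ns_hotplug_stress.sh may differ):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000                                  # illustrative starting size
    while kill -0 "$perf_pid" 2>/dev/null; do       # keep churning while the I/O workload is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 $null_size
    done
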
00:14:14.603 20:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.863 20:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:14.863 20:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:14.863 true 00:14:14.863 20:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:14.863 20:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.123 20:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.123 20:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:15.123 20:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:15.383 true 00:14:15.383 20:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:15.383 20:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.643 20:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.643 20:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:15.643 20:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:15.904 true 00:14:15.904 20:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:15.904 20:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.904 20:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.164 20:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:16.164 20:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:16.424 true 00:14:16.424 20:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:16.424 20:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.424 20:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.684 20:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:16.684 20:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:16.945 true 00:14:16.945 20:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:16.945 20:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.945 20:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.205 20:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:17.205 20:27:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:17.205 true 00:14:17.466 20:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:17.466 20:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.466 20:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.726 20:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:17.726 20:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:17.726 true 00:14:17.726 20:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:17.726 20:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.000 20:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.265 20:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:14:18.265 20:27:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:18.265 true 00:14:18.265 20:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:18.265 20:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.526 20:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.787 20:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 
00:14:18.787 20:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:18.787 true 00:14:18.787 20:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:18.787 20:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.046 20:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.046 20:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:14:19.046 20:27:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:19.307 true 00:14:19.307 20:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:19.308 20:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.567 20:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.567 20:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:14:19.567 20:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:19.826 true 00:14:19.826 20:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:19.826 20:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.087 20:27:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.087 20:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:14:20.087 20:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:20.347 true 00:14:20.347 20:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:20.347 20:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.607 20:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.607 20:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:14:20.607 20:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1049 00:14:20.867 true 00:14:20.867 20:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:20.867 20:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.129 20:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.129 20:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:14:21.129 20:27:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:14:21.390 true 00:14:21.390 20:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:21.390 20:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.391 20:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.652 20:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:14:21.652 20:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:14:21.912 true 00:14:21.912 20:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:21.912 20:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.912 20:27:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.172 20:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:14:22.172 20:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:14:22.433 true 00:14:22.433 20:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:22.433 20:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.433 20:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.694 20:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:14:22.694 20:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:14:22.694 true 00:14:22.954 20:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 
00:14:22.954 20:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.954 20:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.213 20:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:14:23.213 20:27:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:14:23.213 true 00:14:23.213 20:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:23.213 20:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.473 20:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.732 20:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:14:23.732 20:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:14:23.732 true 00:14:23.732 20:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:23.732 20:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.993 20:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.252 20:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:14:24.252 20:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:14:24.252 true 00:14:24.252 20:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:24.252 20:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.511 20:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.771 20:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:14:24.771 20:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:14:24.771 true 00:14:24.771 20:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:24.772 20:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.031 20:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.031 20:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:14:25.031 20:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:14:25.291 true 00:14:25.291 20:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:25.291 20:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.551 20:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.551 20:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:14:25.551 20:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:14:25.810 true 00:14:25.810 20:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:25.810 20:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.070 20:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.070 20:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:14:26.070 20:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:14:26.329 true 00:14:26.329 20:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:26.329 20:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.589 20:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.589 20:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:14:26.589 20:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:14:26.849 Initializing NVMe Controllers 00:14:26.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:26.849 Controller IO queue size 128, less than required. 00:14:26.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:14:26.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:26.849 Initialization complete. Launching workers. 00:14:26.849 ======================================================== 00:14:26.849 Latency(us) 00:14:26.849 Device Information : IOPS MiB/s Average min max 00:14:26.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31960.51 15.61 4004.85 1633.25 10058.02 00:14:26.849 ======================================================== 00:14:26.849 Total : 31960.51 15.61 4004.85 1633.25 10058.02 00:14:26.849 00:14:26.849 true 00:14:26.849 20:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2950468 00:14:26.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2950468) - No such process 00:14:26.849 20:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2950468 00:14:26.849 20:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.109 20:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.109 20:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:27.110 20:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:27.110 20:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:27.110 20:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:27.110 20:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:27.370 null0 00:14:27.370 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:27.370 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:27.370 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:27.370 null1 00:14:27.370 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:27.370 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:27.370 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:27.631 null2 00:14:27.631 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:27.631 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:27.631 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:27.890 null3 00:14:27.890 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:27.890 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:27.890 20:27:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:27.890 null4 00:14:27.890 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:27.890 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:27.890 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:28.150 null5 00:14:28.150 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:28.150 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:28.150 20:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:28.410 null6 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:28.410 null7 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
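The interleaved entries in this phase come from the multi-threaded part of the test: eight null bdevs (null0..null7, 100 MB with a 4096-byte block size) are created, and eight background workers each add and remove their own namespace ID (1..8) ten times in parallel before the script waits on all of them. A minimal bash sketch of that pattern, using the rpc.py calls visible in the trace (the add_remove name is taken from the function shown at ns_hotplug_stress.sh@14; other details are illustrative and may differ from the real script):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    for ((i = 0; i < nthreads; i++)); do            # create null0..null7
        $rpc bdev_null_create "null$i" 100 4096
    done
    add_remove() {                                  # one worker: churn a single namespace ID
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    pids=()
    for ((i = 0; i < nthreads; i++)); do            # one background worker per namespace
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"
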
00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2956997 2956998 2957000 2957003 2957004 2957006 2957008 2957010 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.410 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:28.671 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.671 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:28.671 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:28.671 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:28.671 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:28.671 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:28.671 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:28.671 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.932 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:28.933 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.933 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.933 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:28.933 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.933 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.933 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:28.933 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.933 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.933 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:28.933 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.933 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:28.933 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:29.195 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.195 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:29.195 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:29.195 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:29.195 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:29.195 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.195 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.195 20:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:29.195 
20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.195 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.461 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.790 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.067 20:27:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:30.067 20:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:30.329 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:30.590 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.590 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.591 20:27:46 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:30.591 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.851 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:31.112 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.112 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.112 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:31.112 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:31.112 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.112 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.112 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:31.112 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:31.112 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.112 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:31.112 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:31.112 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:31.112 20:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:31.112 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:31.112 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.112 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.112 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:31.112 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.112 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.112 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:31.112 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.112 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.112 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:31.112 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.112 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.112 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.373 
20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:31.373 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:31.634 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
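The interleaved entries above are the namespace hotplug loop itself: ns_hotplug_stress.sh@16 counts ten passes, @17 re-adds namespaces 1-8 (nsid N backed by bdev null<N-1>) and @18 hot-removes them again, with the rpc.py calls racing one another, which is why the add, remove and counter lines interleave. A minimal bash sketch of that pattern, assuming the real script simply backgrounds the RPC calls (the authoritative loop lives in target/ns_hotplug_stress.sh):

    #!/usr/bin/env bash
    # Hedged sketch only -- not the shipped target/ns_hotplug_stress.sh.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    i=0
    while (( i < 10 )); do                                    # ns_hotplug_stress.sh@16
        for n in {1..8}; do
            # ns_hotplug_stress.sh@17: nsid n backed by bdev null$((n-1))
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
        done
        for n in {1..8}; do
            # ns_hotplug_stress.sh@18: hot-remove the same namespaces; ordering
            # against the adds is not enforced, mirroring the interleaving above
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
        done
        wait
        (( ++i ))
    done

Null bdevs keep the churn cheap, so each pass exercises the subsystem's namespace attach/detach paths rather than real I/O.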
00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.896 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.158 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.158 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.158 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.158 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.158 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:32.158 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:32.158 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:32.158 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:32.159 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:32.159 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:32.159 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.159 20:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:32.159 rmmod nvme_tcp 00:14:32.159 rmmod nvme_fabrics 00:14:32.159 rmmod nvme_keyring 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2949979 ']' 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2949979 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 2949979 ']' 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 2949979 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2949979 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2949979' 00:14:32.159 killing process with pid 2949979 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 2949979 00:14:32.159 [2024-05-13 20:27:48.086177] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is 
deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:32.159 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 2949979 00:14:32.421 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:32.421 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:32.421 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:32.421 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.421 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:32.421 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.421 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.421 20:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.337 20:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:34.337 00:14:34.337 real 0m48.865s 00:14:34.337 user 3m15.283s 00:14:34.337 sys 0m17.106s 00:14:34.337 20:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:34.337 20:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.337 ************************************ 00:14:34.337 END TEST nvmf_ns_hotplug_stress 00:14:34.337 ************************************ 00:14:34.598 20:27:50 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:34.598 20:27:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:34.598 20:27:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:34.598 20:27:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:34.598 ************************************ 00:14:34.598 START TEST nvmf_connect_stress 00:14:34.598 ************************************ 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:34.598 * Looking for test storage... 
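Before connect_stress takes over, the hotplug run above tears itself down: nvmftestfini unloads the kernel initiator modules, killprocess stops the nvmf_tgt application (pid 2949979 in this run), and the namespaced interfaces are flushed. Condensed into a sketch (the real logic is spread across the nvmf/common.sh helpers logged above):

    # Condensed from the nvmftestfini/nvmfcleanup entries above; $nvmfpid stands
    # in for the target pid, and _remove_spdk_ns is the common.sh helper that
    # presumably deletes the cvl_0_0_ns_spdk namespace.
    sync
    modprobe -v -r nvme-tcp          # also drops nvme_fabrics/nvme_keyring, per the rmmod output above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"
    wait "$nvmfpid"
    _remove_spdk_ns
    ip -4 addr flush cvl_0_1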
00:14:34.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.598 20:27:50 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:34.599 20:27:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:42.746 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:42.746 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:42.746 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:42.747 Found net devices under 0000:31:00.0: cvl_0_0 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:42.747 20:27:58 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:42.747 Found net devices under 0000:31:00.1: cvl_0_1 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:42.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:42.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:14:42.747 00:14:42.747 --- 10.0.0.2 ping statistics --- 00:14:42.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.747 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:42.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:42.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:14:42.747 00:14:42.747 --- 10.0.0.1 ping statistics --- 00:14:42.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.747 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2962512 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2962512 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 2962512 ']' 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.747 20:27:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:42.747 [2024-05-13 20:27:58.554387] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
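For reference, the nvmf_tcp_init sequence traced above isolates one physical port as the target side by moving it into its own network namespace, addresses both ends, allows NVMe/TCP traffic on port 4420 through the host firewall, and checks reachability in both directions before the target application is started. Condensed into plain commands (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are what this rig's E810 ports report; they are not fixed by SPDK):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1            # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                                   # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address (host side)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (namespace side)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP on 4420
  ping -c 1 10.0.0.2                                             # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host

Because the target lives in cvl_0_0_ns_spdk, nvmf_tgt is launched with 'ip netns exec cvl_0_0_ns_spdk ...', and every target-side command later in the trace is wrapped the same way.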
00:14:42.747 [2024-05-13 20:27:58.554437] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.747 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.747 [2024-05-13 20:27:58.646369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:43.008 [2024-05-13 20:27:58.739139] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.008 [2024-05-13 20:27:58.739203] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.008 [2024-05-13 20:27:58.739211] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.008 [2024-05-13 20:27:58.739218] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.008 [2024-05-13 20:27:58.739224] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.008 [2024-05-13 20:27:58.739362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.008 [2024-05-13 20:27:58.739613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.008 [2024-05-13 20:27:58.739617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.576 [2024-05-13 20:27:59.368687] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.576 [2024-05-13 20:27:59.384877] nvmf_rpc.c: 610:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:43.576 [2024-05-13 20:27:59.399454] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.576 NULL1 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2962698 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.576 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.148 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.148 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:44.148 20:27:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.148 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.148 20:27:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.408 20:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.408 20:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:44.408 20:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.408 20:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.408 20:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.670 20:28:00 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.670 20:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:44.670 20:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.670 20:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.670 20:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.932 20:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.932 20:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:44.932 20:28:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.932 20:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.932 20:28:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.499 20:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.499 20:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:45.499 20:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.499 20:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.499 20:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.759 20:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.759 20:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:45.759 20:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.759 20:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.759 20:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.020 20:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.020 20:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:46.020 20:28:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.020 20:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.020 20:28:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.282 20:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.282 20:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:46.282 20:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.282 20:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.282 20:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.544 20:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.544 20:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:46.544 20:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.544 20:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.544 20:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.114 20:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:14:47.114 20:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:47.114 20:28:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.114 20:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.114 20:28:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.375 20:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.375 20:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:47.375 20:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.375 20:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.375 20:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.635 20:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.635 20:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:47.635 20:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.635 20:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.635 20:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.902 20:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.902 20:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:47.902 20:28:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.902 20:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.902 20:28:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.163 20:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.163 20:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:48.163 20:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.163 20:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.163 20:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.738 20:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.738 20:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:48.738 20:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.738 20:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.738 20:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.998 20:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.998 20:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:48.998 20:28:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.998 20:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.998 20:28:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.257 20:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.257 20:28:05 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:49.257 20:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.257 20:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.257 20:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.516 20:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.516 20:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:49.516 20:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.516 20:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.516 20:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.777 20:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.777 20:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:49.777 20:28:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.777 20:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.777 20:28:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.350 20:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.350 20:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:50.350 20:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.350 20:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.350 20:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.611 20:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.611 20:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:50.611 20:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.611 20:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.611 20:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.872 20:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.872 20:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:50.872 20:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.872 20:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.872 20:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.131 20:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.131 20:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:51.131 20:28:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.131 20:28:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.131 20:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.390 20:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.390 20:28:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 2962698 00:14:51.390 20:28:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.390 20:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.390 20:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.972 20:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.972 20:28:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:51.972 20:28:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.972 20:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.972 20:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.233 20:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.233 20:28:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:52.233 20:28:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.233 20:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.233 20:28:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.493 20:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.493 20:28:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:52.493 20:28:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.493 20:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.493 20:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.752 20:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.752 20:28:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:52.752 20:28:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.752 20:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.752 20:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.010 20:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.010 20:28:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:53.010 20:28:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.010 20:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.010 20:28:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.578 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.578 20:28:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:53.578 20:28:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.578 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.578 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.839 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.839 20:28:09 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2962698 00:14:53.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2962698) - No such process 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2962698 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.839 rmmod nvme_tcp 00:14:53.839 rmmod nvme_fabrics 00:14:53.839 rmmod nvme_keyring 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2962512 ']' 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2962512 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 2962512 ']' 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 2962512 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2962512 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2962512' 00:14:53.839 killing process with pid 2962512 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 2962512 00:14:53.839 [2024-05-13 20:28:09.739283] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:53.839 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 2962512 00:14:54.101 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:54.101 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:54.101 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:54.101 20:28:09 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:54.101 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:54.101 20:28:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.101 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.101 20:28:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.012 20:28:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:56.012 00:14:56.012 real 0m21.571s 00:14:56.012 user 0m42.267s 00:14:56.012 sys 0m9.244s 00:14:56.012 20:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:56.012 20:28:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.012 ************************************ 00:14:56.012 END TEST nvmf_connect_stress 00:14:56.012 ************************************ 00:14:56.274 20:28:11 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:56.274 20:28:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:56.274 20:28:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:56.274 20:28:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:56.274 ************************************ 00:14:56.274 START TEST nvmf_fused_ordering 00:14:56.274 ************************************ 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:56.274 * Looking for test storage... 
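The nvmf_connect_stress run that finished above follows a simple pattern visible in its trace: connect_stress is started against nqn.2016-06.io.spdk:cnode1 with a 10-second limit (-t 10) and its pid recorded, a batch of RPC calls is assembled into rpc.txt, and the script keeps confirming the stressor is alive with kill -0 before driving more RPCs at the target; when the 10 seconds are up, kill -0 reports "No such process", the script waits on the pid and removes rpc.txt. A rough sketch of that liveness-poll loop (the exact RPCs batched into rpc.txt are not shown in this trace, and the launch and loop details are paraphrased):

  connect_stress -c 0x1 -r "$trid" -t 10 &     # $trid: the trtype/traddr/subnqn string from the trace
  PERF_PID=$!
  while kill -0 "$PERF_PID" 2>/dev/null; do    # stressor still running?
      rpc_cmd < "$rpcs"                        # replay the batched RPC calls against the target
  done
  wait "$PERF_PID"
  rm -f "$rpcs"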
00:14:56.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:56.274 20:28:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:04.414 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:04.414 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:04.414 Found net devices under 0000:31:00.0: cvl_0_0 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.414 20:28:19 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:04.414 Found net devices under 0000:31:00.1: cvl_0_1 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:04.414 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:04.415 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.415 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:04.415 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:04.415 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:04.415 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:04.415 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:04.415 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:04.415 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:04.415 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:04.415 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:04.415 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:04.415 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:04.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:04.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:15:04.415 00:15:04.415 --- 10.0.0.2 ping statistics --- 00:15:04.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.415 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:15:04.415 20:28:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:04.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:04.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:15:04.415 00:15:04.415 --- 10.0.0.1 ping statistics --- 00:15:04.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.415 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2969415 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2969415 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 2969415 ']' 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:04.415 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.415 [2024-05-13 20:28:20.090443] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
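With connectivity verified and nvmf_tgt starting inside the namespace (core mask 0x2 this time), the fused_ordering script stands up a minimal target entirely over RPC, much like the previous test: a TCP transport, one subsystem, a TCP listener on 10.0.0.2:4420, and a 1000 MB null bdev with 512-byte blocks exposed as namespace 1 (reported as 1GB when the initiator attaches). The calls, as they appear in the trace that follows:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Here -a allows any host NQN to connect, -s sets the serial number, and -m caps the namespace count; the add_listener call still triggers the [listen_]address.transport deprecation warning seen earlier in the log.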
00:15:04.415 [2024-05-13 20:28:20.090499] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.415 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.415 [2024-05-13 20:28:20.174609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.415 [2024-05-13 20:28:20.262680] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.415 [2024-05-13 20:28:20.262737] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.415 [2024-05-13 20:28:20.262745] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.415 [2024-05-13 20:28:20.262752] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.415 [2024-05-13 20:28:20.262758] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.415 [2024-05-13 20:28:20.262783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.990 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:04.991 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:15:04.991 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.991 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:04.991 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.275 [2024-05-13 20:28:20.962772] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.275 [2024-05-13 20:28:20.986774] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:05.275 [2024-05-13 20:28:20.987015] tcp.c: 965:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.275 20:28:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.275 NULL1 00:15:05.275 20:28:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.275 20:28:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:05.275 20:28:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.275 20:28:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.275 20:28:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.275 20:28:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:05.275 20:28:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.275 20:28:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:05.275 20:28:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.275 20:28:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:05.275 [2024-05-13 20:28:21.055987] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:15:05.275 [2024-05-13 20:28:21.056034] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2969595 ] 00:15:05.275 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.848 Attached to nqn.2016-06.io.spdk:cnode1 00:15:05.848 Namespace ID: 1 size: 1GB 00:15:05.848 fused_ordering(0) 00:15:05.848 fused_ordering(1) 00:15:05.848 fused_ordering(2) 00:15:05.848 fused_ordering(3) 00:15:05.849 fused_ordering(4) 00:15:05.849 fused_ordering(5) 00:15:05.849 fused_ordering(6) 00:15:05.849 fused_ordering(7) 00:15:05.849 fused_ordering(8) 00:15:05.849 fused_ordering(9) 00:15:05.849 fused_ordering(10) 00:15:05.849 fused_ordering(11) 00:15:05.849 fused_ordering(12) 00:15:05.849 fused_ordering(13) 00:15:05.849 fused_ordering(14) 00:15:05.849 fused_ordering(15) 00:15:05.849 fused_ordering(16) 00:15:05.849 fused_ordering(17) 00:15:05.849 fused_ordering(18) 00:15:05.849 fused_ordering(19) 00:15:05.849 fused_ordering(20) 00:15:05.849 fused_ordering(21) 00:15:05.849 fused_ordering(22) 00:15:05.849 fused_ordering(23) 00:15:05.849 fused_ordering(24) 00:15:05.849 fused_ordering(25) 00:15:05.849 fused_ordering(26) 00:15:05.849 fused_ordering(27) 00:15:05.849 fused_ordering(28) 00:15:05.849 fused_ordering(29) 00:15:05.849 fused_ordering(30) 00:15:05.849 fused_ordering(31) 00:15:05.849 fused_ordering(32) 00:15:05.849 fused_ordering(33) 00:15:05.849 fused_ordering(34) 00:15:05.849 fused_ordering(35) 00:15:05.849 fused_ordering(36) 00:15:05.849 fused_ordering(37) 00:15:05.849 fused_ordering(38) 00:15:05.849 fused_ordering(39) 00:15:05.849 fused_ordering(40) 00:15:05.849 fused_ordering(41) 00:15:05.849 fused_ordering(42) 00:15:05.849 fused_ordering(43) 00:15:05.849 fused_ordering(44) 00:15:05.849 fused_ordering(45) 00:15:05.849 fused_ordering(46) 00:15:05.849 fused_ordering(47) 00:15:05.849 fused_ordering(48) 00:15:05.849 fused_ordering(49) 00:15:05.849 fused_ordering(50) 00:15:05.849 fused_ordering(51) 00:15:05.849 fused_ordering(52) 00:15:05.849 fused_ordering(53) 00:15:05.849 fused_ordering(54) 00:15:05.849 fused_ordering(55) 00:15:05.849 fused_ordering(56) 00:15:05.849 fused_ordering(57) 00:15:05.849 fused_ordering(58) 00:15:05.849 fused_ordering(59) 00:15:05.849 fused_ordering(60) 00:15:05.849 fused_ordering(61) 00:15:05.849 fused_ordering(62) 00:15:05.849 fused_ordering(63) 00:15:05.849 fused_ordering(64) 00:15:05.849 fused_ordering(65) 00:15:05.849 fused_ordering(66) 00:15:05.849 fused_ordering(67) 00:15:05.849 fused_ordering(68) 00:15:05.849 fused_ordering(69) 00:15:05.849 fused_ordering(70) 00:15:05.849 fused_ordering(71) 00:15:05.849 fused_ordering(72) 00:15:05.849 fused_ordering(73) 00:15:05.849 fused_ordering(74) 00:15:05.849 fused_ordering(75) 00:15:05.849 fused_ordering(76) 00:15:05.849 fused_ordering(77) 00:15:05.849 fused_ordering(78) 00:15:05.849 fused_ordering(79) 00:15:05.849 fused_ordering(80) 00:15:05.849 fused_ordering(81) 00:15:05.849 fused_ordering(82) 00:15:05.849 fused_ordering(83) 00:15:05.849 fused_ordering(84) 00:15:05.849 fused_ordering(85) 00:15:05.849 fused_ordering(86) 00:15:05.849 fused_ordering(87) 00:15:05.849 fused_ordering(88) 00:15:05.849 fused_ordering(89) 00:15:05.849 fused_ordering(90) 00:15:05.849 fused_ordering(91) 00:15:05.849 fused_ordering(92) 00:15:05.849 fused_ordering(93) 00:15:05.849 fused_ordering(94) 00:15:05.849 fused_ordering(95) 00:15:05.849 fused_ordering(96) 00:15:05.849 
fused_ordering(97) 00:15:05.849 fused_ordering(98) 00:15:05.849 fused_ordering(99) 00:15:05.849 fused_ordering(100) 00:15:05.849 fused_ordering(101) 00:15:05.849 fused_ordering(102) 00:15:05.849 fused_ordering(103) 00:15:05.849 fused_ordering(104) 00:15:05.849 fused_ordering(105) 00:15:05.849 fused_ordering(106) 00:15:05.849 fused_ordering(107) 00:15:05.849 fused_ordering(108) 00:15:05.849 fused_ordering(109) 00:15:05.849 fused_ordering(110) 00:15:05.849 fused_ordering(111) 00:15:05.849 fused_ordering(112) 00:15:05.849 fused_ordering(113) 00:15:05.849 fused_ordering(114) 00:15:05.849 fused_ordering(115) 00:15:05.849 fused_ordering(116) 00:15:05.849 fused_ordering(117) 00:15:05.849 fused_ordering(118) 00:15:05.849 fused_ordering(119) 00:15:05.849 fused_ordering(120) 00:15:05.849 fused_ordering(121) 00:15:05.849 fused_ordering(122) 00:15:05.849 fused_ordering(123) 00:15:05.849 fused_ordering(124) 00:15:05.849 fused_ordering(125) 00:15:05.849 fused_ordering(126) 00:15:05.849 fused_ordering(127) 00:15:05.849 fused_ordering(128) 00:15:05.849 fused_ordering(129) 00:15:05.849 fused_ordering(130) 00:15:05.849 fused_ordering(131) 00:15:05.849 fused_ordering(132) 00:15:05.849 fused_ordering(133) 00:15:05.849 fused_ordering(134) 00:15:05.849 fused_ordering(135) 00:15:05.849 fused_ordering(136) 00:15:05.849 fused_ordering(137) 00:15:05.849 fused_ordering(138) 00:15:05.849 fused_ordering(139) 00:15:05.849 fused_ordering(140) 00:15:05.849 fused_ordering(141) 00:15:05.849 fused_ordering(142) 00:15:05.849 fused_ordering(143) 00:15:05.849 fused_ordering(144) 00:15:05.849 fused_ordering(145) 00:15:05.849 fused_ordering(146) 00:15:05.849 fused_ordering(147) 00:15:05.849 fused_ordering(148) 00:15:05.849 fused_ordering(149) 00:15:05.849 fused_ordering(150) 00:15:05.849 fused_ordering(151) 00:15:05.849 fused_ordering(152) 00:15:05.849 fused_ordering(153) 00:15:05.849 fused_ordering(154) 00:15:05.849 fused_ordering(155) 00:15:05.849 fused_ordering(156) 00:15:05.849 fused_ordering(157) 00:15:05.849 fused_ordering(158) 00:15:05.849 fused_ordering(159) 00:15:05.849 fused_ordering(160) 00:15:05.849 fused_ordering(161) 00:15:05.849 fused_ordering(162) 00:15:05.849 fused_ordering(163) 00:15:05.849 fused_ordering(164) 00:15:05.849 fused_ordering(165) 00:15:05.849 fused_ordering(166) 00:15:05.849 fused_ordering(167) 00:15:05.849 fused_ordering(168) 00:15:05.849 fused_ordering(169) 00:15:05.849 fused_ordering(170) 00:15:05.849 fused_ordering(171) 00:15:05.849 fused_ordering(172) 00:15:05.849 fused_ordering(173) 00:15:05.849 fused_ordering(174) 00:15:05.849 fused_ordering(175) 00:15:05.849 fused_ordering(176) 00:15:05.849 fused_ordering(177) 00:15:05.849 fused_ordering(178) 00:15:05.849 fused_ordering(179) 00:15:05.849 fused_ordering(180) 00:15:05.849 fused_ordering(181) 00:15:05.849 fused_ordering(182) 00:15:05.849 fused_ordering(183) 00:15:05.849 fused_ordering(184) 00:15:05.849 fused_ordering(185) 00:15:05.849 fused_ordering(186) 00:15:05.849 fused_ordering(187) 00:15:05.849 fused_ordering(188) 00:15:05.849 fused_ordering(189) 00:15:05.849 fused_ordering(190) 00:15:05.849 fused_ordering(191) 00:15:05.849 fused_ordering(192) 00:15:05.849 fused_ordering(193) 00:15:05.849 fused_ordering(194) 00:15:05.849 fused_ordering(195) 00:15:05.849 fused_ordering(196) 00:15:05.849 fused_ordering(197) 00:15:05.849 fused_ordering(198) 00:15:05.849 fused_ordering(199) 00:15:05.849 fused_ordering(200) 00:15:05.849 fused_ordering(201) 00:15:05.849 fused_ordering(202) 00:15:05.849 fused_ordering(203) 00:15:05.849 fused_ordering(204) 
00:15:05.849 fused_ordering(205) 00:15:06.110 fused_ordering(206) 00:15:06.110 fused_ordering(207) 00:15:06.110 fused_ordering(208) 00:15:06.110 fused_ordering(209) 00:15:06.111 fused_ordering(210) 00:15:06.111 fused_ordering(211) 00:15:06.111 fused_ordering(212) 00:15:06.111 fused_ordering(213) 00:15:06.111 fused_ordering(214) 00:15:06.111 fused_ordering(215) 00:15:06.111 fused_ordering(216) 00:15:06.111 fused_ordering(217) 00:15:06.111 fused_ordering(218) 00:15:06.111 fused_ordering(219) 00:15:06.111 fused_ordering(220) 00:15:06.111 fused_ordering(221) 00:15:06.111 fused_ordering(222) 00:15:06.111 fused_ordering(223) 00:15:06.111 fused_ordering(224) 00:15:06.111 fused_ordering(225) 00:15:06.111 fused_ordering(226) 00:15:06.111 fused_ordering(227) 00:15:06.111 fused_ordering(228) 00:15:06.111 fused_ordering(229) 00:15:06.111 fused_ordering(230) 00:15:06.111 fused_ordering(231) 00:15:06.111 fused_ordering(232) 00:15:06.111 fused_ordering(233) 00:15:06.111 fused_ordering(234) 00:15:06.111 fused_ordering(235) 00:15:06.111 fused_ordering(236) 00:15:06.111 fused_ordering(237) 00:15:06.111 fused_ordering(238) 00:15:06.111 fused_ordering(239) 00:15:06.111 fused_ordering(240) 00:15:06.111 fused_ordering(241) 00:15:06.111 fused_ordering(242) 00:15:06.111 fused_ordering(243) 00:15:06.111 fused_ordering(244) 00:15:06.111 fused_ordering(245) 00:15:06.111 fused_ordering(246) 00:15:06.111 fused_ordering(247) 00:15:06.111 fused_ordering(248) 00:15:06.111 fused_ordering(249) 00:15:06.111 fused_ordering(250) 00:15:06.111 fused_ordering(251) 00:15:06.111 fused_ordering(252) 00:15:06.111 fused_ordering(253) 00:15:06.111 fused_ordering(254) 00:15:06.111 fused_ordering(255) 00:15:06.111 fused_ordering(256) 00:15:06.111 fused_ordering(257) 00:15:06.111 fused_ordering(258) 00:15:06.111 fused_ordering(259) 00:15:06.111 fused_ordering(260) 00:15:06.111 fused_ordering(261) 00:15:06.111 fused_ordering(262) 00:15:06.111 fused_ordering(263) 00:15:06.111 fused_ordering(264) 00:15:06.111 fused_ordering(265) 00:15:06.111 fused_ordering(266) 00:15:06.111 fused_ordering(267) 00:15:06.111 fused_ordering(268) 00:15:06.111 fused_ordering(269) 00:15:06.111 fused_ordering(270) 00:15:06.111 fused_ordering(271) 00:15:06.111 fused_ordering(272) 00:15:06.111 fused_ordering(273) 00:15:06.111 fused_ordering(274) 00:15:06.111 fused_ordering(275) 00:15:06.111 fused_ordering(276) 00:15:06.111 fused_ordering(277) 00:15:06.111 fused_ordering(278) 00:15:06.111 fused_ordering(279) 00:15:06.111 fused_ordering(280) 00:15:06.111 fused_ordering(281) 00:15:06.111 fused_ordering(282) 00:15:06.111 fused_ordering(283) 00:15:06.111 fused_ordering(284) 00:15:06.111 fused_ordering(285) 00:15:06.111 fused_ordering(286) 00:15:06.111 fused_ordering(287) 00:15:06.111 fused_ordering(288) 00:15:06.111 fused_ordering(289) 00:15:06.111 fused_ordering(290) 00:15:06.111 fused_ordering(291) 00:15:06.111 fused_ordering(292) 00:15:06.111 fused_ordering(293) 00:15:06.111 fused_ordering(294) 00:15:06.111 fused_ordering(295) 00:15:06.111 fused_ordering(296) 00:15:06.111 fused_ordering(297) 00:15:06.111 fused_ordering(298) 00:15:06.111 fused_ordering(299) 00:15:06.111 fused_ordering(300) 00:15:06.111 fused_ordering(301) 00:15:06.111 fused_ordering(302) 00:15:06.111 fused_ordering(303) 00:15:06.111 fused_ordering(304) 00:15:06.111 fused_ordering(305) 00:15:06.111 fused_ordering(306) 00:15:06.111 fused_ordering(307) 00:15:06.111 fused_ordering(308) 00:15:06.111 fused_ordering(309) 00:15:06.111 fused_ordering(310) 00:15:06.111 fused_ordering(311) 00:15:06.111 
fused_ordering(312) 00:15:06.111 fused_ordering(313) 00:15:06.111 fused_ordering(314) 00:15:06.111 fused_ordering(315) 00:15:06.111 fused_ordering(316) 00:15:06.111 fused_ordering(317) 00:15:06.111 fused_ordering(318) 00:15:06.111 fused_ordering(319) 00:15:06.111 fused_ordering(320) 00:15:06.111 fused_ordering(321) 00:15:06.111 fused_ordering(322) 00:15:06.111 fused_ordering(323) 00:15:06.111 fused_ordering(324) 00:15:06.111 fused_ordering(325) 00:15:06.111 fused_ordering(326) 00:15:06.111 fused_ordering(327) 00:15:06.111 fused_ordering(328) 00:15:06.111 fused_ordering(329) 00:15:06.111 fused_ordering(330) 00:15:06.111 fused_ordering(331) 00:15:06.111 fused_ordering(332) 00:15:06.111 fused_ordering(333) 00:15:06.111 fused_ordering(334) 00:15:06.111 fused_ordering(335) 00:15:06.111 fused_ordering(336) 00:15:06.111 fused_ordering(337) 00:15:06.111 fused_ordering(338) 00:15:06.111 fused_ordering(339) 00:15:06.111 fused_ordering(340) 00:15:06.111 fused_ordering(341) 00:15:06.111 fused_ordering(342) 00:15:06.111 fused_ordering(343) 00:15:06.111 fused_ordering(344) 00:15:06.111 fused_ordering(345) 00:15:06.111 fused_ordering(346) 00:15:06.111 fused_ordering(347) 00:15:06.111 fused_ordering(348) 00:15:06.111 fused_ordering(349) 00:15:06.111 fused_ordering(350) 00:15:06.111 fused_ordering(351) 00:15:06.111 fused_ordering(352) 00:15:06.111 fused_ordering(353) 00:15:06.111 fused_ordering(354) 00:15:06.111 fused_ordering(355) 00:15:06.111 fused_ordering(356) 00:15:06.111 fused_ordering(357) 00:15:06.111 fused_ordering(358) 00:15:06.111 fused_ordering(359) 00:15:06.111 fused_ordering(360) 00:15:06.111 fused_ordering(361) 00:15:06.111 fused_ordering(362) 00:15:06.111 fused_ordering(363) 00:15:06.111 fused_ordering(364) 00:15:06.111 fused_ordering(365) 00:15:06.111 fused_ordering(366) 00:15:06.111 fused_ordering(367) 00:15:06.111 fused_ordering(368) 00:15:06.111 fused_ordering(369) 00:15:06.111 fused_ordering(370) 00:15:06.111 fused_ordering(371) 00:15:06.111 fused_ordering(372) 00:15:06.111 fused_ordering(373) 00:15:06.111 fused_ordering(374) 00:15:06.111 fused_ordering(375) 00:15:06.111 fused_ordering(376) 00:15:06.111 fused_ordering(377) 00:15:06.111 fused_ordering(378) 00:15:06.111 fused_ordering(379) 00:15:06.111 fused_ordering(380) 00:15:06.111 fused_ordering(381) 00:15:06.111 fused_ordering(382) 00:15:06.111 fused_ordering(383) 00:15:06.111 fused_ordering(384) 00:15:06.111 fused_ordering(385) 00:15:06.111 fused_ordering(386) 00:15:06.111 fused_ordering(387) 00:15:06.111 fused_ordering(388) 00:15:06.111 fused_ordering(389) 00:15:06.111 fused_ordering(390) 00:15:06.111 fused_ordering(391) 00:15:06.111 fused_ordering(392) 00:15:06.111 fused_ordering(393) 00:15:06.111 fused_ordering(394) 00:15:06.111 fused_ordering(395) 00:15:06.111 fused_ordering(396) 00:15:06.111 fused_ordering(397) 00:15:06.111 fused_ordering(398) 00:15:06.111 fused_ordering(399) 00:15:06.111 fused_ordering(400) 00:15:06.111 fused_ordering(401) 00:15:06.111 fused_ordering(402) 00:15:06.111 fused_ordering(403) 00:15:06.111 fused_ordering(404) 00:15:06.111 fused_ordering(405) 00:15:06.111 fused_ordering(406) 00:15:06.111 fused_ordering(407) 00:15:06.111 fused_ordering(408) 00:15:06.111 fused_ordering(409) 00:15:06.111 fused_ordering(410) 00:15:06.684 fused_ordering(411) 00:15:06.684 fused_ordering(412) 00:15:06.684 fused_ordering(413) 00:15:06.684 fused_ordering(414) 00:15:06.684 fused_ordering(415) 00:15:06.684 fused_ordering(416) 00:15:06.684 fused_ordering(417) 00:15:06.684 fused_ordering(418) 00:15:06.684 fused_ordering(419) 
00:15:06.684 fused_ordering(420) 00:15:06.684 fused_ordering(421) 00:15:06.684 fused_ordering(422) 00:15:06.684 fused_ordering(423) 00:15:06.684 fused_ordering(424) 00:15:06.684 fused_ordering(425) 00:15:06.684 fused_ordering(426) 00:15:06.684 fused_ordering(427) 00:15:06.684 fused_ordering(428) 00:15:06.684 fused_ordering(429) 00:15:06.684 fused_ordering(430) 00:15:06.684 fused_ordering(431) 00:15:06.684 fused_ordering(432) 00:15:06.684 fused_ordering(433) 00:15:06.684 fused_ordering(434) 00:15:06.684 fused_ordering(435) 00:15:06.684 fused_ordering(436) 00:15:06.684 fused_ordering(437) 00:15:06.684 fused_ordering(438) 00:15:06.684 fused_ordering(439) 00:15:06.684 fused_ordering(440) 00:15:06.684 fused_ordering(441) 00:15:06.684 fused_ordering(442) 00:15:06.684 fused_ordering(443) 00:15:06.684 fused_ordering(444) 00:15:06.684 fused_ordering(445) 00:15:06.684 fused_ordering(446) 00:15:06.684 fused_ordering(447) 00:15:06.684 fused_ordering(448) 00:15:06.684 fused_ordering(449) 00:15:06.684 fused_ordering(450) 00:15:06.684 fused_ordering(451) 00:15:06.684 fused_ordering(452) 00:15:06.684 fused_ordering(453) 00:15:06.684 fused_ordering(454) 00:15:06.684 fused_ordering(455) 00:15:06.684 fused_ordering(456) 00:15:06.684 fused_ordering(457) 00:15:06.684 fused_ordering(458) 00:15:06.684 fused_ordering(459) 00:15:06.684 fused_ordering(460) 00:15:06.684 fused_ordering(461) 00:15:06.684 fused_ordering(462) 00:15:06.684 fused_ordering(463) 00:15:06.684 fused_ordering(464) 00:15:06.684 fused_ordering(465) 00:15:06.684 fused_ordering(466) 00:15:06.684 fused_ordering(467) 00:15:06.684 fused_ordering(468) 00:15:06.684 fused_ordering(469) 00:15:06.684 fused_ordering(470) 00:15:06.684 fused_ordering(471) 00:15:06.684 fused_ordering(472) 00:15:06.684 fused_ordering(473) 00:15:06.684 fused_ordering(474) 00:15:06.684 fused_ordering(475) 00:15:06.684 fused_ordering(476) 00:15:06.684 fused_ordering(477) 00:15:06.684 fused_ordering(478) 00:15:06.684 fused_ordering(479) 00:15:06.684 fused_ordering(480) 00:15:06.684 fused_ordering(481) 00:15:06.684 fused_ordering(482) 00:15:06.684 fused_ordering(483) 00:15:06.684 fused_ordering(484) 00:15:06.684 fused_ordering(485) 00:15:06.684 fused_ordering(486) 00:15:06.684 fused_ordering(487) 00:15:06.684 fused_ordering(488) 00:15:06.684 fused_ordering(489) 00:15:06.684 fused_ordering(490) 00:15:06.684 fused_ordering(491) 00:15:06.684 fused_ordering(492) 00:15:06.684 fused_ordering(493) 00:15:06.684 fused_ordering(494) 00:15:06.684 fused_ordering(495) 00:15:06.684 fused_ordering(496) 00:15:06.684 fused_ordering(497) 00:15:06.684 fused_ordering(498) 00:15:06.684 fused_ordering(499) 00:15:06.684 fused_ordering(500) 00:15:06.684 fused_ordering(501) 00:15:06.684 fused_ordering(502) 00:15:06.684 fused_ordering(503) 00:15:06.684 fused_ordering(504) 00:15:06.684 fused_ordering(505) 00:15:06.684 fused_ordering(506) 00:15:06.684 fused_ordering(507) 00:15:06.684 fused_ordering(508) 00:15:06.684 fused_ordering(509) 00:15:06.684 fused_ordering(510) 00:15:06.684 fused_ordering(511) 00:15:06.684 fused_ordering(512) 00:15:06.684 fused_ordering(513) 00:15:06.684 fused_ordering(514) 00:15:06.684 fused_ordering(515) 00:15:06.684 fused_ordering(516) 00:15:06.684 fused_ordering(517) 00:15:06.684 fused_ordering(518) 00:15:06.684 fused_ordering(519) 00:15:06.684 fused_ordering(520) 00:15:06.684 fused_ordering(521) 00:15:06.684 fused_ordering(522) 00:15:06.684 fused_ordering(523) 00:15:06.684 fused_ordering(524) 00:15:06.684 fused_ordering(525) 00:15:06.684 fused_ordering(526) 00:15:06.684 
fused_ordering(527) 00:15:06.684 fused_ordering(528) 00:15:06.684 fused_ordering(529) 00:15:06.684 fused_ordering(530) 00:15:06.684 fused_ordering(531) 00:15:06.684 fused_ordering(532) 00:15:06.684 fused_ordering(533) 00:15:06.684 fused_ordering(534) 00:15:06.684 fused_ordering(535) 00:15:06.684 fused_ordering(536) 00:15:06.684 fused_ordering(537) 00:15:06.684 fused_ordering(538) 00:15:06.684 fused_ordering(539) 00:15:06.684 fused_ordering(540) 00:15:06.684 fused_ordering(541) 00:15:06.684 fused_ordering(542) 00:15:06.684 fused_ordering(543) 00:15:06.684 fused_ordering(544) 00:15:06.684 fused_ordering(545) 00:15:06.684 fused_ordering(546) 00:15:06.684 fused_ordering(547) 00:15:06.684 fused_ordering(548) 00:15:06.684 fused_ordering(549) 00:15:06.684 fused_ordering(550) 00:15:06.684 fused_ordering(551) 00:15:06.684 fused_ordering(552) 00:15:06.684 fused_ordering(553) 00:15:06.684 fused_ordering(554) 00:15:06.684 fused_ordering(555) 00:15:06.684 fused_ordering(556) 00:15:06.684 fused_ordering(557) 00:15:06.684 fused_ordering(558) 00:15:06.684 fused_ordering(559) 00:15:06.684 fused_ordering(560) 00:15:06.684 fused_ordering(561) 00:15:06.684 fused_ordering(562) 00:15:06.684 fused_ordering(563) 00:15:06.684 fused_ordering(564) 00:15:06.684 fused_ordering(565) 00:15:06.684 fused_ordering(566) 00:15:06.684 fused_ordering(567) 00:15:06.684 fused_ordering(568) 00:15:06.684 fused_ordering(569) 00:15:06.685 fused_ordering(570) 00:15:06.685 fused_ordering(571) 00:15:06.685 fused_ordering(572) 00:15:06.685 fused_ordering(573) 00:15:06.685 fused_ordering(574) 00:15:06.685 fused_ordering(575) 00:15:06.685 fused_ordering(576) 00:15:06.685 fused_ordering(577) 00:15:06.685 fused_ordering(578) 00:15:06.685 fused_ordering(579) 00:15:06.685 fused_ordering(580) 00:15:06.685 fused_ordering(581) 00:15:06.685 fused_ordering(582) 00:15:06.685 fused_ordering(583) 00:15:06.685 fused_ordering(584) 00:15:06.685 fused_ordering(585) 00:15:06.685 fused_ordering(586) 00:15:06.685 fused_ordering(587) 00:15:06.685 fused_ordering(588) 00:15:06.685 fused_ordering(589) 00:15:06.685 fused_ordering(590) 00:15:06.685 fused_ordering(591) 00:15:06.685 fused_ordering(592) 00:15:06.685 fused_ordering(593) 00:15:06.685 fused_ordering(594) 00:15:06.685 fused_ordering(595) 00:15:06.685 fused_ordering(596) 00:15:06.685 fused_ordering(597) 00:15:06.685 fused_ordering(598) 00:15:06.685 fused_ordering(599) 00:15:06.685 fused_ordering(600) 00:15:06.685 fused_ordering(601) 00:15:06.685 fused_ordering(602) 00:15:06.685 fused_ordering(603) 00:15:06.685 fused_ordering(604) 00:15:06.685 fused_ordering(605) 00:15:06.685 fused_ordering(606) 00:15:06.685 fused_ordering(607) 00:15:06.685 fused_ordering(608) 00:15:06.685 fused_ordering(609) 00:15:06.685 fused_ordering(610) 00:15:06.685 fused_ordering(611) 00:15:06.685 fused_ordering(612) 00:15:06.685 fused_ordering(613) 00:15:06.685 fused_ordering(614) 00:15:06.685 fused_ordering(615) 00:15:07.255 fused_ordering(616) 00:15:07.255 fused_ordering(617) 00:15:07.255 fused_ordering(618) 00:15:07.255 fused_ordering(619) 00:15:07.255 fused_ordering(620) 00:15:07.255 fused_ordering(621) 00:15:07.255 fused_ordering(622) 00:15:07.255 fused_ordering(623) 00:15:07.255 fused_ordering(624) 00:15:07.255 fused_ordering(625) 00:15:07.255 fused_ordering(626) 00:15:07.255 fused_ordering(627) 00:15:07.255 fused_ordering(628) 00:15:07.255 fused_ordering(629) 00:15:07.255 fused_ordering(630) 00:15:07.255 fused_ordering(631) 00:15:07.255 fused_ordering(632) 00:15:07.255 fused_ordering(633) 00:15:07.255 fused_ordering(634) 
00:15:07.255 fused_ordering(635) 00:15:07.255 fused_ordering(636) 00:15:07.255 fused_ordering(637) 00:15:07.255 fused_ordering(638) 00:15:07.255 fused_ordering(639) 00:15:07.255 fused_ordering(640) 00:15:07.255 fused_ordering(641) 00:15:07.255 fused_ordering(642) 00:15:07.255 fused_ordering(643) 00:15:07.255 fused_ordering(644) 00:15:07.255 fused_ordering(645) 00:15:07.255 fused_ordering(646) 00:15:07.255 fused_ordering(647) 00:15:07.255 fused_ordering(648) 00:15:07.255 fused_ordering(649) 00:15:07.255 fused_ordering(650) 00:15:07.255 fused_ordering(651) 00:15:07.255 fused_ordering(652) 00:15:07.255 fused_ordering(653) 00:15:07.255 fused_ordering(654) 00:15:07.255 fused_ordering(655) 00:15:07.255 fused_ordering(656) 00:15:07.255 fused_ordering(657) 00:15:07.255 fused_ordering(658) 00:15:07.255 fused_ordering(659) 00:15:07.255 fused_ordering(660) 00:15:07.255 fused_ordering(661) 00:15:07.255 fused_ordering(662) 00:15:07.255 fused_ordering(663) 00:15:07.255 fused_ordering(664) 00:15:07.255 fused_ordering(665) 00:15:07.255 fused_ordering(666) 00:15:07.255 fused_ordering(667) 00:15:07.255 fused_ordering(668) 00:15:07.255 fused_ordering(669) 00:15:07.255 fused_ordering(670) 00:15:07.255 fused_ordering(671) 00:15:07.255 fused_ordering(672) 00:15:07.255 fused_ordering(673) 00:15:07.255 fused_ordering(674) 00:15:07.255 fused_ordering(675) 00:15:07.255 fused_ordering(676) 00:15:07.255 fused_ordering(677) 00:15:07.255 fused_ordering(678) 00:15:07.255 fused_ordering(679) 00:15:07.255 fused_ordering(680) 00:15:07.255 fused_ordering(681) 00:15:07.255 fused_ordering(682) 00:15:07.255 fused_ordering(683) 00:15:07.255 fused_ordering(684) 00:15:07.255 fused_ordering(685) 00:15:07.255 fused_ordering(686) 00:15:07.255 fused_ordering(687) 00:15:07.255 fused_ordering(688) 00:15:07.255 fused_ordering(689) 00:15:07.255 fused_ordering(690) 00:15:07.255 fused_ordering(691) 00:15:07.255 fused_ordering(692) 00:15:07.255 fused_ordering(693) 00:15:07.255 fused_ordering(694) 00:15:07.255 fused_ordering(695) 00:15:07.255 fused_ordering(696) 00:15:07.255 fused_ordering(697) 00:15:07.255 fused_ordering(698) 00:15:07.255 fused_ordering(699) 00:15:07.255 fused_ordering(700) 00:15:07.255 fused_ordering(701) 00:15:07.255 fused_ordering(702) 00:15:07.255 fused_ordering(703) 00:15:07.255 fused_ordering(704) 00:15:07.255 fused_ordering(705) 00:15:07.255 fused_ordering(706) 00:15:07.255 fused_ordering(707) 00:15:07.255 fused_ordering(708) 00:15:07.255 fused_ordering(709) 00:15:07.255 fused_ordering(710) 00:15:07.255 fused_ordering(711) 00:15:07.255 fused_ordering(712) 00:15:07.255 fused_ordering(713) 00:15:07.255 fused_ordering(714) 00:15:07.255 fused_ordering(715) 00:15:07.255 fused_ordering(716) 00:15:07.255 fused_ordering(717) 00:15:07.255 fused_ordering(718) 00:15:07.255 fused_ordering(719) 00:15:07.255 fused_ordering(720) 00:15:07.255 fused_ordering(721) 00:15:07.255 fused_ordering(722) 00:15:07.255 fused_ordering(723) 00:15:07.255 fused_ordering(724) 00:15:07.255 fused_ordering(725) 00:15:07.255 fused_ordering(726) 00:15:07.255 fused_ordering(727) 00:15:07.255 fused_ordering(728) 00:15:07.255 fused_ordering(729) 00:15:07.255 fused_ordering(730) 00:15:07.255 fused_ordering(731) 00:15:07.255 fused_ordering(732) 00:15:07.255 fused_ordering(733) 00:15:07.255 fused_ordering(734) 00:15:07.255 fused_ordering(735) 00:15:07.255 fused_ordering(736) 00:15:07.255 fused_ordering(737) 00:15:07.255 fused_ordering(738) 00:15:07.255 fused_ordering(739) 00:15:07.255 fused_ordering(740) 00:15:07.255 fused_ordering(741) 00:15:07.255 
fused_ordering(742) 00:15:07.255 fused_ordering(743) 00:15:07.255 fused_ordering(744) 00:15:07.255 fused_ordering(745) 00:15:07.255 fused_ordering(746) 00:15:07.255 fused_ordering(747) 00:15:07.255 fused_ordering(748) 00:15:07.255 fused_ordering(749) 00:15:07.255 fused_ordering(750) 00:15:07.255 fused_ordering(751) 00:15:07.255 fused_ordering(752) 00:15:07.255 fused_ordering(753) 00:15:07.255 fused_ordering(754) 00:15:07.255 fused_ordering(755) 00:15:07.255 fused_ordering(756) 00:15:07.255 fused_ordering(757) 00:15:07.255 fused_ordering(758) 00:15:07.255 fused_ordering(759) 00:15:07.255 fused_ordering(760) 00:15:07.255 fused_ordering(761) 00:15:07.256 fused_ordering(762) 00:15:07.256 fused_ordering(763) 00:15:07.256 fused_ordering(764) 00:15:07.256 fused_ordering(765) 00:15:07.256 fused_ordering(766) 00:15:07.256 fused_ordering(767) 00:15:07.256 fused_ordering(768) 00:15:07.256 fused_ordering(769) 00:15:07.256 fused_ordering(770) 00:15:07.256 fused_ordering(771) 00:15:07.256 fused_ordering(772) 00:15:07.256 fused_ordering(773) 00:15:07.256 fused_ordering(774) 00:15:07.256 fused_ordering(775) 00:15:07.256 fused_ordering(776) 00:15:07.256 fused_ordering(777) 00:15:07.256 fused_ordering(778) 00:15:07.256 fused_ordering(779) 00:15:07.256 fused_ordering(780) 00:15:07.256 fused_ordering(781) 00:15:07.256 fused_ordering(782) 00:15:07.256 fused_ordering(783) 00:15:07.256 fused_ordering(784) 00:15:07.256 fused_ordering(785) 00:15:07.256 fused_ordering(786) 00:15:07.256 fused_ordering(787) 00:15:07.256 fused_ordering(788) 00:15:07.256 fused_ordering(789) 00:15:07.256 fused_ordering(790) 00:15:07.256 fused_ordering(791) 00:15:07.256 fused_ordering(792) 00:15:07.256 fused_ordering(793) 00:15:07.256 fused_ordering(794) 00:15:07.256 fused_ordering(795) 00:15:07.256 fused_ordering(796) 00:15:07.256 fused_ordering(797) 00:15:07.256 fused_ordering(798) 00:15:07.256 fused_ordering(799) 00:15:07.256 fused_ordering(800) 00:15:07.256 fused_ordering(801) 00:15:07.256 fused_ordering(802) 00:15:07.256 fused_ordering(803) 00:15:07.256 fused_ordering(804) 00:15:07.256 fused_ordering(805) 00:15:07.256 fused_ordering(806) 00:15:07.256 fused_ordering(807) 00:15:07.256 fused_ordering(808) 00:15:07.256 fused_ordering(809) 00:15:07.256 fused_ordering(810) 00:15:07.256 fused_ordering(811) 00:15:07.256 fused_ordering(812) 00:15:07.256 fused_ordering(813) 00:15:07.256 fused_ordering(814) 00:15:07.256 fused_ordering(815) 00:15:07.256 fused_ordering(816) 00:15:07.256 fused_ordering(817) 00:15:07.256 fused_ordering(818) 00:15:07.256 fused_ordering(819) 00:15:07.256 fused_ordering(820) 00:15:07.827 fused_ordering(821) 00:15:07.827 fused_ordering(822) 00:15:07.827 fused_ordering(823) 00:15:07.827 fused_ordering(824) 00:15:07.827 fused_ordering(825) 00:15:07.827 fused_ordering(826) 00:15:07.827 fused_ordering(827) 00:15:07.827 fused_ordering(828) 00:15:07.827 fused_ordering(829) 00:15:07.827 fused_ordering(830) 00:15:07.827 fused_ordering(831) 00:15:07.827 fused_ordering(832) 00:15:07.827 fused_ordering(833) 00:15:07.827 fused_ordering(834) 00:15:07.827 fused_ordering(835) 00:15:07.827 fused_ordering(836) 00:15:07.827 fused_ordering(837) 00:15:07.827 fused_ordering(838) 00:15:07.827 fused_ordering(839) 00:15:07.827 fused_ordering(840) 00:15:07.827 fused_ordering(841) 00:15:07.827 fused_ordering(842) 00:15:07.827 fused_ordering(843) 00:15:07.827 fused_ordering(844) 00:15:07.827 fused_ordering(845) 00:15:07.827 fused_ordering(846) 00:15:07.827 fused_ordering(847) 00:15:07.827 fused_ordering(848) 00:15:07.827 fused_ordering(849) 
00:15:07.827 fused_ordering(850) 00:15:07.827 fused_ordering(851) 00:15:07.827 fused_ordering(852) 00:15:07.827 fused_ordering(853) 00:15:07.827 fused_ordering(854) 00:15:07.827 fused_ordering(855) 00:15:07.827 fused_ordering(856) 00:15:07.827 fused_ordering(857) 00:15:07.827 fused_ordering(858) 00:15:07.827 fused_ordering(859) 00:15:07.827 fused_ordering(860) 00:15:07.827 fused_ordering(861) 00:15:07.827 fused_ordering(862) 00:15:07.827 fused_ordering(863) 00:15:07.827 fused_ordering(864) 00:15:07.827 fused_ordering(865) 00:15:07.827 fused_ordering(866) 00:15:07.827 fused_ordering(867) 00:15:07.827 fused_ordering(868) 00:15:07.827 fused_ordering(869) 00:15:07.827 fused_ordering(870) 00:15:07.827 fused_ordering(871) 00:15:07.827 fused_ordering(872) 00:15:07.827 fused_ordering(873) 00:15:07.827 fused_ordering(874) 00:15:07.827 fused_ordering(875) 00:15:07.827 fused_ordering(876) 00:15:07.827 fused_ordering(877) 00:15:07.827 fused_ordering(878) 00:15:07.827 fused_ordering(879) 00:15:07.827 fused_ordering(880) 00:15:07.827 fused_ordering(881) 00:15:07.827 fused_ordering(882) 00:15:07.827 fused_ordering(883) 00:15:07.827 fused_ordering(884) 00:15:07.827 fused_ordering(885) 00:15:07.827 fused_ordering(886) 00:15:07.827 fused_ordering(887) 00:15:07.827 fused_ordering(888) 00:15:07.827 fused_ordering(889) 00:15:07.827 fused_ordering(890) 00:15:07.827 fused_ordering(891) 00:15:07.827 fused_ordering(892) 00:15:07.827 fused_ordering(893) 00:15:07.827 fused_ordering(894) 00:15:07.827 fused_ordering(895) 00:15:07.827 fused_ordering(896) 00:15:07.827 fused_ordering(897) 00:15:07.827 fused_ordering(898) 00:15:07.827 fused_ordering(899) 00:15:07.827 fused_ordering(900) 00:15:07.827 fused_ordering(901) 00:15:07.827 fused_ordering(902) 00:15:07.827 fused_ordering(903) 00:15:07.827 fused_ordering(904) 00:15:07.827 fused_ordering(905) 00:15:07.827 fused_ordering(906) 00:15:07.827 fused_ordering(907) 00:15:07.827 fused_ordering(908) 00:15:07.827 fused_ordering(909) 00:15:07.827 fused_ordering(910) 00:15:07.827 fused_ordering(911) 00:15:07.827 fused_ordering(912) 00:15:07.827 fused_ordering(913) 00:15:07.827 fused_ordering(914) 00:15:07.827 fused_ordering(915) 00:15:07.827 fused_ordering(916) 00:15:07.827 fused_ordering(917) 00:15:07.827 fused_ordering(918) 00:15:07.827 fused_ordering(919) 00:15:07.827 fused_ordering(920) 00:15:07.827 fused_ordering(921) 00:15:07.827 fused_ordering(922) 00:15:07.827 fused_ordering(923) 00:15:07.827 fused_ordering(924) 00:15:07.827 fused_ordering(925) 00:15:07.827 fused_ordering(926) 00:15:07.827 fused_ordering(927) 00:15:07.827 fused_ordering(928) 00:15:07.827 fused_ordering(929) 00:15:07.827 fused_ordering(930) 00:15:07.827 fused_ordering(931) 00:15:07.827 fused_ordering(932) 00:15:07.827 fused_ordering(933) 00:15:07.827 fused_ordering(934) 00:15:07.827 fused_ordering(935) 00:15:07.827 fused_ordering(936) 00:15:07.827 fused_ordering(937) 00:15:07.827 fused_ordering(938) 00:15:07.827 fused_ordering(939) 00:15:07.827 fused_ordering(940) 00:15:07.827 fused_ordering(941) 00:15:07.827 fused_ordering(942) 00:15:07.827 fused_ordering(943) 00:15:07.827 fused_ordering(944) 00:15:07.828 fused_ordering(945) 00:15:07.828 fused_ordering(946) 00:15:07.828 fused_ordering(947) 00:15:07.828 fused_ordering(948) 00:15:07.828 fused_ordering(949) 00:15:07.828 fused_ordering(950) 00:15:07.828 fused_ordering(951) 00:15:07.828 fused_ordering(952) 00:15:07.828 fused_ordering(953) 00:15:07.828 fused_ordering(954) 00:15:07.828 fused_ordering(955) 00:15:07.828 fused_ordering(956) 00:15:07.828 
fused_ordering(957) 00:15:07.828 fused_ordering(958) 00:15:07.828 fused_ordering(959) 00:15:07.828 fused_ordering(960) 00:15:07.828 fused_ordering(961) 00:15:07.828 fused_ordering(962) 00:15:07.828 fused_ordering(963) 00:15:07.828 fused_ordering(964) 00:15:07.828 fused_ordering(965) 00:15:07.828 fused_ordering(966) 00:15:07.828 fused_ordering(967) 00:15:07.828 fused_ordering(968) 00:15:07.828 fused_ordering(969) 00:15:07.828 fused_ordering(970) 00:15:07.828 fused_ordering(971) 00:15:07.828 fused_ordering(972) 00:15:07.828 fused_ordering(973) 00:15:07.828 fused_ordering(974) 00:15:07.828 fused_ordering(975) 00:15:07.828 fused_ordering(976) 00:15:07.828 fused_ordering(977) 00:15:07.828 fused_ordering(978) 00:15:07.828 fused_ordering(979) 00:15:07.828 fused_ordering(980) 00:15:07.828 fused_ordering(981) 00:15:07.828 fused_ordering(982) 00:15:07.828 fused_ordering(983) 00:15:07.828 fused_ordering(984) 00:15:07.828 fused_ordering(985) 00:15:07.828 fused_ordering(986) 00:15:07.828 fused_ordering(987) 00:15:07.828 fused_ordering(988) 00:15:07.828 fused_ordering(989) 00:15:07.828 fused_ordering(990) 00:15:07.828 fused_ordering(991) 00:15:07.828 fused_ordering(992) 00:15:07.828 fused_ordering(993) 00:15:07.828 fused_ordering(994) 00:15:07.828 fused_ordering(995) 00:15:07.828 fused_ordering(996) 00:15:07.828 fused_ordering(997) 00:15:07.828 fused_ordering(998) 00:15:07.828 fused_ordering(999) 00:15:07.828 fused_ordering(1000) 00:15:07.828 fused_ordering(1001) 00:15:07.828 fused_ordering(1002) 00:15:07.828 fused_ordering(1003) 00:15:07.828 fused_ordering(1004) 00:15:07.828 fused_ordering(1005) 00:15:07.828 fused_ordering(1006) 00:15:07.828 fused_ordering(1007) 00:15:07.828 fused_ordering(1008) 00:15:07.828 fused_ordering(1009) 00:15:07.828 fused_ordering(1010) 00:15:07.828 fused_ordering(1011) 00:15:07.828 fused_ordering(1012) 00:15:07.828 fused_ordering(1013) 00:15:07.828 fused_ordering(1014) 00:15:07.828 fused_ordering(1015) 00:15:07.828 fused_ordering(1016) 00:15:07.828 fused_ordering(1017) 00:15:07.828 fused_ordering(1018) 00:15:07.828 fused_ordering(1019) 00:15:07.828 fused_ordering(1020) 00:15:07.828 fused_ordering(1021) 00:15:07.828 fused_ordering(1022) 00:15:07.828 fused_ordering(1023) 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:07.828 rmmod nvme_tcp 00:15:07.828 rmmod nvme_fabrics 00:15:07.828 rmmod nvme_keyring 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2969415 ']' 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2969415 
00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 2969415 ']' 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 2969415 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2969415 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2969415' 00:15:07.828 killing process with pid 2969415 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 2969415 00:15:07.828 [2024-05-13 20:28:23.710642] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:07.828 20:28:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 2969415 00:15:08.089 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:08.089 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:08.089 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:08.089 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:08.089 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:08.089 20:28:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.089 20:28:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.089 20:28:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.001 20:28:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:10.001 00:15:10.001 real 0m13.929s 00:15:10.001 user 0m7.452s 00:15:10.001 sys 0m7.378s 00:15:10.001 20:28:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:10.001 20:28:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:10.001 ************************************ 00:15:10.001 END TEST nvmf_fused_ordering 00:15:10.001 ************************************ 00:15:10.263 20:28:25 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:10.263 20:28:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:10.263 20:28:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:10.263 20:28:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:10.263 ************************************ 00:15:10.263 START TEST nvmf_delete_subsystem 00:15:10.263 ************************************ 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 
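For reference, the target-side setup that the fused_ordering test traced above reduces to a short sequence of SPDK RPCs. The following is a minimal hand-written sketch of that sequence as direct rpc.py calls, assuming the repository's scripts/rpc.py client and its default /var/tmp/spdk.sock socket; the method names and flags are copied from the rpc_cmd trace above, while the comments and the standalone invocation style are illustrative and not part of the original test script.

    # create the TCP transport with the options the test passed (-o, -u 8192)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # create subsystem cnode1: allow any host (-a), set the serial number, cap it at 10 namespaces
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # listen for NVMe/TCP connections on 10.0.0.2:4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # back the subsystem with a 1000 MB null bdev (512-byte blocks) and attach it as namespace 1
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1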
00:15:10.263 * Looking for test storage... 00:15:10.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.263 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:10.264 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:10.264 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.264 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:10.264 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:10.264 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:10.264 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.264 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.264 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.264 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:10.264 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:10.264 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:15:10.264 20:28:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:18.398 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:18.398 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:18.398 Found net devices under 0000:31:00.0: cvl_0_0 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:18.398 Found net devices under 0000:31:00.1: cvl_0_1 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:18.398 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:18.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:15:18.659 00:15:18.659 --- 10.0.0.2 ping statistics --- 00:15:18.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.659 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:18.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:18.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:15:18.659 00:15:18.659 --- 10.0.0.1 ping statistics --- 00:15:18.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.659 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2974937 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2974937 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 2974937 ']' 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
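The trace up to this point is the nvmf_tcp_init helper wiring the two e810 ports into a point-to-point NVMe/TCP test path before the target is launched inside the namespace: the target-side interface is moved into a private network namespace, each side gets a 10.0.0.0/24 address, an iptables rule opens port 4420, and two pings confirm reachability in both directions. Condensed into a bash sketch (interface names cvl_0_0/cvl_0_1 and the addresses are the values chosen in this run, not fixed requirements; the address-flush and cleanup steps are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in on the initiator side
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator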
00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:18.659 20:28:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:18.659 [2024-05-13 20:28:34.496216] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:15:18.659 [2024-05-13 20:28:34.496262] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.659 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.659 [2024-05-13 20:28:34.568486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:18.920 [2024-05-13 20:28:34.632772] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.920 [2024-05-13 20:28:34.632812] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.920 [2024-05-13 20:28:34.632820] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.920 [2024-05-13 20:28:34.632826] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.920 [2024-05-13 20:28:34.632832] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.920 [2024-05-13 20:28:34.632969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.920 [2024-05-13 20:28:34.632971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:19.491 [2024-05-13 20:28:35.300363] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.491 20:28:35 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:19.491 [2024-05-13 20:28:35.316334] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:19.491 [2024-05-13 20:28:35.316540] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:19.491 NULL1 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:19.491 Delay0 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2974978 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:19.491 20:28:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:19.492 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.492 [2024-05-13 20:28:35.401184] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
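With the target up, the script has now built the device it will delete out from under active I/O: a null bdev wrapped in a delay bdev whose four latency arguments (average and p99, read and write) are all 1,000,000 microseconds, roughly a second per request, exposed as a namespace of cnode1 and hammered by spdk_nvme_perf. The same sequence, written as plain rpc.py calls for readability (the test actually issues them through its rpc_cmd wrapper against the namespaced target's /var/tmp/spdk.sock):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512                # 1000 MiB backing bdev, 512-byte blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

With about a second of added latency per I/O and a queue depth of 128, plenty of requests are still outstanding when nvmf_delete_subsystem is issued after the two-second sleep, which is why the completions below start coming back with errors once the subsystem is torn down.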
00:15:21.408 20:28:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.408 20:28:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.408 20:28:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 starting I/O failed: -6 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 starting I/O failed: -6 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 starting I/O failed: -6 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 starting I/O failed: -6 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 starting I/O failed: -6 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 starting I/O failed: -6 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 starting I/O failed: -6 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 starting I/O failed: -6 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 starting I/O failed: -6 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 starting I/O failed: -6 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 starting I/O failed: -6 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 starting I/O failed: -6 00:15:21.669 Write completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error 
(sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.669 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 [2024-05-13 20:28:37.530064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22dcc90 is same with the state(5) to be set 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed 
with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, 
sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 Write completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 starting I/O failed: -6 00:15:21.670 Read completed with error (sct=0, sc=8) 00:15:21.670 [2024-05-13 20:28:37.531950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fce24000c00 is same with the state(5) to be set 00:15:22.614 [2024-05-13 20:28:38.500796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e5250 is same with the state(5) to be set 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with 
error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 [2024-05-13 20:28:38.533629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c68b0 is same with the state(5) to be set 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 [2024-05-13 20:28:38.533759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e6290 is same with the state(5) to be set 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, 
sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 [2024-05-13 20:28:38.534134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fce2400bfe0 is same with the state(5) to be set 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Write completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.614 Read completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Write completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Write completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Write completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Write completed with error (sct=0, sc=8) 00:15:22.615 Write completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Write completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Write completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Write completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Read completed with error (sct=0, sc=8) 00:15:22.615 Write completed with error (sct=0, sc=8) 00:15:22.615 Write completed with error (sct=0, sc=8) 00:15:22.615 [2024-05-13 20:28:38.534327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fce2400c780 is same with the state(5) to be set 00:15:22.615 Initializing NVMe Controllers 00:15:22.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:22.615 Controller IO queue size 128, less than required. 00:15:22.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:15:22.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:22.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:22.615 Initialization complete. Launching workers. 00:15:22.615 ======================================================== 00:15:22.615 Latency(us) 00:15:22.615 Device Information : IOPS MiB/s Average min max 00:15:22.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.44 0.09 882805.56 297.08 1011097.34 00:15:22.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 170.47 0.08 976018.14 367.86 2001983.74 00:15:22.615 ======================================================== 00:15:22.615 Total : 345.91 0.17 928742.22 297.08 2001983.74 00:15:22.615 00:15:22.615 [2024-05-13 20:28:38.534854] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e5250 (9): Bad file descriptor 00:15:22.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:22.615 20:28:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.615 20:28:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:15:22.615 20:28:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2974978 00:15:22.615 20:28:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:23.188 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:23.188 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2974978 00:15:23.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2974978) - No such process 00:15:23.188 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2974978 00:15:23.188 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:15:23.188 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2974978 00:15:23.188 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:15:23.188 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.188 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:15:23.188 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.188 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2974978 00:15:23.188 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:15:23.188 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:23.188 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:23.189 20:28:39 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:23.189 [2024-05-13 20:28:39.065620] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2975793 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975793 00:15:23.189 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:23.189 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.189 [2024-05-13 20:28:39.132064] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
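The second pass recreates cnode1 with the same Delay0 namespace, starts a shorter (-t 3) perf run, and then simply polls until the perf process goes away. A rough reconstruction of the polling loop visible in the xtrace between delete_subsystem.sh@56 and @67 (a sketch inferred from the trace, not the script verbatim; the 0.5 s sleep and the 20-iteration cap are the traced values, the error handling is illustrative):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do        # still running?
        if (( delay++ > 20 )); then                  # about a 10 s upper bound before giving up
            echo "perf pid $perf_pid did not exit in time" >&2
            break
        fi
        sleep 0.5
    done
    wait "$perf_pid" || true                         # reap it; "No such process" just means it already finished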
00:15:23.761 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:23.761 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975793 00:15:23.761 20:28:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:24.341 20:28:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:24.341 20:28:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975793 00:15:24.341 20:28:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:24.917 20:28:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:24.917 20:28:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975793 00:15:24.917 20:28:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:25.178 20:28:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:25.178 20:28:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975793 00:15:25.178 20:28:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:25.750 20:28:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:25.750 20:28:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975793 00:15:25.750 20:28:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:26.321 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:26.321 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975793 00:15:26.321 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:26.583 Initializing NVMe Controllers 00:15:26.583 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:26.583 Controller IO queue size 128, less than required. 00:15:26.583 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:26.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:26.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:26.583 Initialization complete. Launching workers. 
00:15:26.583 ======================================================== 00:15:26.583 Latency(us) 00:15:26.583 Device Information : IOPS MiB/s Average min max 00:15:26.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002519.46 1000122.02 1041003.80 00:15:26.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003372.83 1000326.82 1009761.79 00:15:26.583 ======================================================== 00:15:26.583 Total : 256.00 0.12 1002946.15 1000122.02 1041003.80 00:15:26.583 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975793 00:15:26.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2975793) - No such process 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2975793 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:26.844 rmmod nvme_tcp 00:15:26.844 rmmod nvme_fabrics 00:15:26.844 rmmod nvme_keyring 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2974937 ']' 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2974937 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 2974937 ']' 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 2974937 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2974937 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2974937' 00:15:26.844 killing process with pid 2974937 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 2974937 00:15:26.844 [2024-05-13 20:28:42.760995] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:26.844 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 2974937 00:15:27.107 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:27.107 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:27.107 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:27.107 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.107 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.107 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.107 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.107 20:28:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.021 20:28:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:29.283 00:15:29.283 real 0m18.949s 00:15:29.283 user 0m31.077s 00:15:29.283 sys 0m6.937s 00:15:29.283 20:28:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:29.283 20:28:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:29.283 ************************************ 00:15:29.283 END TEST nvmf_delete_subsystem 00:15:29.283 ************************************ 00:15:29.283 20:28:45 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:29.283 20:28:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:29.283 20:28:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:29.283 20:28:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:29.283 ************************************ 00:15:29.283 START TEST nvmf_ns_masking 00:15:29.283 ************************************ 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:29.283 * Looking for test storage... 
00:15:29.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.283 20:28:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=51a93e95-eb56-49c0-b2ff-5b68ac1b34f7 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:29.284 20:28:45 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:29.284 20:28:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:37.427 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:37.427 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:37.427 Found net devices under 0000:31:00.0: cvl_0_0 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:37.427 Found net devices under 0000:31:00.1: cvl_0_1 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:37.427 20:28:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:37.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:37.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:15:37.427 00:15:37.427 --- 10.0.0.2 ping statistics --- 00:15:37.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.427 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:37.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:37.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:15:37.427 00:15:37.427 --- 10.0.0.1 ping statistics --- 00:15:37.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.427 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2981263 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2981263 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 2981263 ']' 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.427 20:28:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:37.428 20:28:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.428 20:28:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:37.428 20:28:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:37.428 [2024-05-13 20:28:53.154600] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
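At this point nvmf_tcp_init has finished building the topology the two pings just verified: the first E810 port (cvl_0_0, the target side) is moved into the private network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, while its peer port (cvl_0_1, the initiator side) stays in the default namespace as 10.0.0.1/24, so NVMe/TCP traffic on port 4420 goes over the physical ports rather than a loopback path. Reduced to the commands visible in this trace, the setup is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # default (initiator) namespace -> target IP
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> initiator IP

The target application is then started inside that namespace (the NVMF_APP line above prepends the ip netns exec prefix), which is why the nvmf_tgt startup banner follows.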
00:15:37.428 [2024-05-13 20:28:53.154661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.428 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.428 [2024-05-13 20:28:53.232883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.428 [2024-05-13 20:28:53.302829] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.428 [2024-05-13 20:28:53.302871] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.428 [2024-05-13 20:28:53.302878] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.428 [2024-05-13 20:28:53.302885] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.428 [2024-05-13 20:28:53.302891] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.428 [2024-05-13 20:28:53.302954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.428 [2024-05-13 20:28:53.303095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.428 [2024-05-13 20:28:53.303225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.428 [2024-05-13 20:28:53.303228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.999 20:28:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:37.999 20:28:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:15:37.999 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:37.999 20:28:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:37.999 20:28:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:38.259 20:28:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.259 20:28:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:38.259 [2024-05-13 20:28:54.104341] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.259 20:28:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:38.259 20:28:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:38.259 20:28:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:38.519 Malloc1 00:15:38.519 20:28:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:38.519 Malloc2 00:15:38.780 20:28:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:38.780 20:28:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:39.041 20:28:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.041 [2024-05-13 20:28:54.949025] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:39.041 [2024-05-13 20:28:54.949310] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.041 20:28:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:15:39.041 20:28:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 51a93e95-eb56-49c0-b2ff-5b68ac1b34f7 -a 10.0.0.2 -s 4420 -i 4 00:15:39.302 20:28:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:39.302 20:28:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:39.302 20:28:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:39.302 20:28:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:39.302 20:28:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:41.217 20:28:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:41.217 20:28:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:41.217 20:28:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:41.217 20:28:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:41.217 20:28:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:41.217 20:28:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:41.217 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:41.217 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:41.478 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:41.478 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:41.478 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:41.478 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:41.478 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:41.478 [ 0]:0x1 00:15:41.478 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:41.478 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:41.478 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7820fa67ee8b45d9b6ecdaee18753383 00:15:41.478 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7820fa67ee8b45d9b6ecdaee18753383 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.478 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:41.739 [ 0]:0x1 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7820fa67ee8b45d9b6ecdaee18753383 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7820fa67ee8b45d9b6ecdaee18753383 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:41.739 [ 1]:0x2 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2126b2ab04374b5eb285ebb521c83031 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2126b2ab04374b5eb285ebb521c83031 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:41.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.739 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.024 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:42.359 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:15:42.359 20:28:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 51a93e95-eb56-49c0-b2ff-5b68ac1b34f7 -a 10.0.0.2 -s 4420 -i 4 00:15:42.359 20:28:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:42.359 20:28:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:42.359 20:28:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:42.359 20:28:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:15:42.359 20:28:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:15:42.359 20:28:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:44.276 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:44.276 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:44.276 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:15:44.276 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:44.276 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:44.276 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:44.276 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:44.276 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:44.536 [ 0]:0x2 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2126b2ab04374b5eb285ebb521c83031 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2126b2ab04374b5eb285ebb521c83031 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.536 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:44.795 [ 0]:0x1 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7820fa67ee8b45d9b6ecdaee18753383 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7820fa67ee8b45d9b6ecdaee18753383 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:44.795 [ 1]:0x2 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2126b2ab04374b5eb285ebb521c83031 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2126b2ab04374b5eb285ebb521c83031 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.795 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:45.055 20:29:00 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:45.055 [ 0]:0x2 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2126b2ab04374b5eb285ebb521c83031 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2126b2ab04374b5eb285ebb521c83031 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:45.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.055 20:29:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:45.315 20:29:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:45.315 20:29:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 51a93e95-eb56-49c0-b2ff-5b68ac1b34f7 -a 10.0.0.2 -s 4420 -i 4 00:15:45.575 20:29:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:45.575 20:29:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:45.575 20:29:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:45.575 20:29:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:45.575 20:29:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:45.575 20:29:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:47.483 [ 0]:0x1 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.483 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7820fa67ee8b45d9b6ecdaee18753383 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7820fa67ee8b45d9b6ecdaee18753383 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:47.743 [ 1]:0x2 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2126b2ab04374b5eb285ebb521c83031 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2126b2ab04374b5eb285ebb521c83031 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:47.743 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:48.002 [ 0]:0x2 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:48.002 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:48.003 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2126b2ab04374b5eb285ebb521c83031 00:15:48.003 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2126b2ab04374b5eb285ebb521c83031 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:48.003 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:48.003 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:48.003 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:48.003 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:48.003 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.003 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:48.003 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.003 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:48.003 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.003 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:48.003 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:48.003 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:48.003 [2024-05-13 20:29:03.937192] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:48.003 
request: 00:15:48.003 { 00:15:48.003 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:48.003 "nsid": 2, 00:15:48.003 "host": "nqn.2016-06.io.spdk:host1", 00:15:48.003 "method": "nvmf_ns_remove_host", 00:15:48.003 "req_id": 1 00:15:48.003 } 00:15:48.003 Got JSON-RPC error response 00:15:48.003 response: 00:15:48.003 { 00:15:48.003 "code": -32602, 00:15:48.003 "message": "Invalid parameters" 00:15:48.003 } 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:48.263 20:29:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:48.263 [ 0]:0x2 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2126b2ab04374b5eb285ebb521c83031 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2126b2ab04374b5eb285ebb521c83031 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:15:48.263 20:29:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:48.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.264 20:29:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:48.524 rmmod nvme_tcp 00:15:48.524 rmmod nvme_fabrics 00:15:48.524 rmmod nvme_keyring 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2981263 ']' 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2981263 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 2981263 ']' 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 2981263 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2981263 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2981263' 00:15:48.524 killing process with pid 2981263 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 2981263 00:15:48.524 [2024-05-13 20:29:04.419175] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:48.524 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 2981263 00:15:48.784 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:48.784 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:48.784 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:48.784 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:15:48.784 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.784 20:29:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.784 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.784 20:29:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.328 20:29:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:51.328 00:15:51.328 real 0m21.594s 00:15:51.328 user 0m49.881s 00:15:51.328 sys 0m7.296s 00:15:51.328 20:29:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:51.328 20:29:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:51.328 ************************************ 00:15:51.328 END TEST nvmf_ns_masking 00:15:51.328 ************************************ 00:15:51.328 20:29:06 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:51.328 20:29:06 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:51.328 20:29:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:51.328 20:29:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:51.328 20:29:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:51.328 ************************************ 00:15:51.328 START TEST nvmf_nvme_cli 00:15:51.328 ************************************ 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:51.328 * Looking for test storage... 
00:15:51.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.328 20:29:06 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:51.329 20:29:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:59.466 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:59.466 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:59.466 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:59.467 Found net devices under 0000:31:00.0: cvl_0_0 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:59.467 Found net devices under 0000:31:00.1: cvl_0_1 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:59.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:15:59.467 00:15:59.467 --- 10.0.0.2 ping statistics --- 00:15:59.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.467 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:59.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:59.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:15:59.467 00:15:59.467 --- 10.0.0.1 ping statistics --- 00:15:59.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.467 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2988162 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2988162 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 2988162 ']' 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:59.467 20:29:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:59.467 [2024-05-13 20:29:14.823263] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:15:59.467 [2024-05-13 20:29:14.823332] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.467 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.467 [2024-05-13 20:29:14.901416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:59.467 [2024-05-13 20:29:14.975270] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.467 [2024-05-13 20:29:14.975318] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:59.467 [2024-05-13 20:29:14.975327] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.467 [2024-05-13 20:29:14.975334] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.467 [2024-05-13 20:29:14.975339] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.467 [2024-05-13 20:29:14.975418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.467 [2024-05-13 20:29:14.975663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:59.467 [2024-05-13 20:29:14.975789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:59.467 [2024-05-13 20:29:14.975793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.732 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:59.732 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:15:59.732 20:29:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:59.732 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:59.732 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:59.732 20:29:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.732 20:29:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:59.732 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.732 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:59.732 [2024-05-13 20:29:15.656938] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.732 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.732 20:29:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:59.732 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.732 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:59.993 Malloc0 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:59.993 Malloc1 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.993 20:29:15 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:59.993 [2024-05-13 20:29:15.746525] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:59.993 [2024-05-13 20:29:15.746773] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:15:59.993 00:15:59.993 Discovery Log Number of Records 2, Generation counter 2 00:15:59.993 =====Discovery Log Entry 0====== 00:15:59.993 trtype: tcp 00:15:59.993 adrfam: ipv4 00:15:59.993 subtype: current discovery subsystem 00:15:59.993 treq: not required 00:15:59.993 portid: 0 00:15:59.993 trsvcid: 4420 00:15:59.993 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:59.993 traddr: 10.0.0.2 00:15:59.993 eflags: explicit discovery connections, duplicate discovery information 00:15:59.993 sectype: none 00:15:59.993 =====Discovery Log Entry 1====== 00:15:59.993 trtype: tcp 00:15:59.993 adrfam: ipv4 00:15:59.993 subtype: nvme subsystem 00:15:59.993 treq: not required 00:15:59.993 portid: 0 00:15:59.993 trsvcid: 4420 00:15:59.993 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:59.993 traddr: 10.0.0.2 00:15:59.993 eflags: none 00:15:59.993 sectype: none 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:59.993 20:29:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:01.904 20:29:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:01.904 20:29:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:16:01.904 20:29:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.904 20:29:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:16:01.904 20:29:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:16:01.904 20:29:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:03.819 /dev/nvme0n1 ]] 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:03.819 20:29:19 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:03.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:03.819 rmmod nvme_tcp 00:16:03.819 rmmod nvme_fabrics 00:16:03.819 rmmod nvme_keyring 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2988162 ']' 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2988162 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 2988162 ']' 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 2988162 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:03.819 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2988162 00:16:04.081 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:04.081 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:04.081 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2988162' 00:16:04.081 killing process with pid 2988162 00:16:04.081 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 2988162 00:16:04.081 [2024-05-13 20:29:19.807583] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:04.081 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 2988162 00:16:04.081 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:04.081 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:04.081 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:04.081 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:04.081 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:04.081 20:29:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.081 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.081 20:29:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.631 20:29:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:06.631 00:16:06.631 real 0m15.307s 00:16:06.631 user 0m22.179s 00:16:06.631 sys 0m6.443s 00:16:06.631 20:29:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:06.631 20:29:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:06.631 ************************************ 00:16:06.631 END TEST nvmf_nvme_cli 00:16:06.631 ************************************ 00:16:06.631 20:29:22 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:16:06.631 20:29:22 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:06.631 20:29:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:06.631 20:29:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:06.631 20:29:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:06.631 ************************************ 
00:16:06.631 START TEST nvmf_host_management 00:16:06.631 ************************************ 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:06.631 * Looking for test storage... 00:16:06.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.631 
20:29:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:06.631 20:29:22 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:06.631 20:29:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:14.854 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:14.854 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:14.854 Found net devices under 0000:31:00.0: cvl_0_0 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:14.854 Found net devices under 0000:31:00.1: cvl_0_1 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.854 20:29:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:14.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:16:14.854 00:16:14.854 --- 10.0.0.2 ping statistics --- 00:16:14.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.854 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:14.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:14.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:16:14.854 00:16:14.854 --- 10.0.0.1 ping statistics --- 00:16:14.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.854 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:14.854 20:29:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:14.855 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2993884 00:16:14.855 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2993884 00:16:14.855 20:29:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:14.855 20:29:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 2993884 ']' 00:16:14.855 20:29:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.855 20:29:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:14.855 20:29:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:14.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.855 20:29:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:14.855 20:29:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:14.855 [2024-05-13 20:29:30.371198] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:16:14.855 [2024-05-13 20:29:30.371260] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.855 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.855 [2024-05-13 20:29:30.468144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.855 [2024-05-13 20:29:30.563537] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.855 [2024-05-13 20:29:30.563603] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.855 [2024-05-13 20:29:30.563611] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.855 [2024-05-13 20:29:30.563619] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.855 [2024-05-13 20:29:30.563625] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.855 [2024-05-13 20:29:30.563761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.855 [2024-05-13 20:29:30.563910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.855 [2024-05-13 20:29:30.564057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.855 [2024-05-13 20:29:30.564058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:15.426 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:15.426 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:15.426 20:29:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.426 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:15.426 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:15.426 20:29:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.426 20:29:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:15.426 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.426 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:15.426 [2024-05-13 20:29:31.202839] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.426 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:15.427 Malloc0 00:16:15.427 [2024-05-13 20:29:31.266045] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:15.427 [2024-05-13 20:29:31.266320] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2994197 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2994197 /var/tmp/bdevperf.sock 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 2994197 ']' 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:15.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
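The rpc_cmd calls traced above drive SPDK's scripts/rpc.py against the target's default /var/tmp/spdk.sock socket. A minimal hand-run sketch of the same bring-up is shown below: the flags mirror what the log records earlier (TCP transport with -o -u 8192, a 64 MiB / 512-byte malloc bdev, a listener on 10.0.0.2 port 4420), while the subsystem NQN nqn.2016-06.io.spdk:cnode0 and its serial are assumptions inferred from the bdevperf config that follows, not a dump of the generated rpcs.txt.

# Illustrative only -- manual equivalent of the scripted target setup (rpc.py defaults to /var/tmp/spdk.sock)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8192-byte I/O unit size
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                # 64 MiB backing bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME   # NQN/serial assumed
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0                      # expose Malloc0 as a namespace
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420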
00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:15.427 { 00:16:15.427 "params": { 00:16:15.427 "name": "Nvme$subsystem", 00:16:15.427 "trtype": "$TEST_TRANSPORT", 00:16:15.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:15.427 "adrfam": "ipv4", 00:16:15.427 "trsvcid": "$NVMF_PORT", 00:16:15.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:15.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:15.427 "hdgst": ${hdgst:-false}, 00:16:15.427 "ddgst": ${ddgst:-false} 00:16:15.427 }, 00:16:15.427 "method": "bdev_nvme_attach_controller" 00:16:15.427 } 00:16:15.427 EOF 00:16:15.427 )") 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:15.427 20:29:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:15.427 "params": { 00:16:15.427 "name": "Nvme0", 00:16:15.427 "trtype": "tcp", 00:16:15.427 "traddr": "10.0.0.2", 00:16:15.427 "adrfam": "ipv4", 00:16:15.427 "trsvcid": "4420", 00:16:15.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:15.427 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:15.427 "hdgst": false, 00:16:15.427 "ddgst": false 00:16:15.427 }, 00:16:15.427 "method": "bdev_nvme_attach_controller" 00:16:15.427 }' 00:16:15.427 [2024-05-13 20:29:31.364587] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:16:15.427 [2024-05-13 20:29:31.364636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994197 ] 00:16:15.687 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.687 [2024-05-13 20:29:31.430273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.687 [2024-05-13 20:29:31.494876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.948 Running I/O for 10 seconds... 
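The bdevperf initiator launched above reads its bdev configuration from the JSON written to /dev/fd/63: once the heredoc template is expanded (visible in the printf output), it contains a single bdev_nvme_attach_controller entry, so bdevperf runs a 10-second verify workload at queue depth 64 with 64 KiB I/Os against the resulting Nvme0n1 bdev. A stand-alone sketch of that launch is below; the "params" block matches the expanded template shown in the log, but the outer "subsystems"/"bdev" wrapper and the temporary file path are assumptions about what gen_nvmf_target_json produces, not text copied from the log.

# Illustrative stand-alone equivalent of the scripted bdevperf launch; JSON wrapper layout is assumed.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
    -q 64 -o 65536 -w verify -t 10      # queue depth 64, 64 KiB I/Os, verify workload, 10 seconds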
00:16:16.209 20:29:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:16.209 20:29:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:16.209 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:16.209 20:29:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.209 20:29:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.473 20:29:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:16.473 [2024-05-13 20:29:32.210224] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.473 [2024-05-13 20:29:32.210304] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.473 [2024-05-13 20:29:32.210318] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be 
set 00:16:16.473 [the same tcp.c:1595:nvmf_tcp_qpair_set_recv_state *ERROR* line, "The recv state of tqpair=0x160f580 is same with the state(5) to be set", repeats here for timestamps 2024-05-13 20:29:32.210325 through 20:29:32.210592] 00:16:16.474 [2024-05-13 20:29:32.210598] 
tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.210604] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.210611] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.210617] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.210623] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.210629] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.210636] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.210642] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.210648] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.210659] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.210665] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.210672] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.210678] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.210684] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.210691] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f580 is same with the state(5) to be set 00:16:16.474 [2024-05-13 20:29:32.211189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.474 [2024-05-13 20:29:32.211577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.474 [2024-05-13 20:29:32.211584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.211986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.211995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.475 [2024-05-13 20:29:32.212238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.475 [2024-05-13 20:29:32.212244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.476 [2024-05-13 20:29:32.212253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.476 [2024-05-13 20:29:32.212260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.476 [2024-05-13 20:29:32.212269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26575b0 is same with the state(5) to be set 00:16:16.476 [2024-05-13 20:29:32.212310] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26575b0 was disconnected and freed. reset controller. 00:16:16.476 [2024-05-13 20:29:32.213529] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:16.476 20:29:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.476 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:16.476 20:29:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.476 20:29:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:16.476 task offset: 90112 on job bdev=Nvme0n1 fails 00:16:16.476 00:16:16.476 Latency(us) 00:16:16.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.476 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:16.476 Job: Nvme0n1 ended in about 0.53 seconds with error 00:16:16.476 Verification LBA range: start 0x0 length 0x400 00:16:16.476 Nvme0n1 : 0.53 1324.87 82.80 120.44 0.00 43184.05 4341.76 36044.80 00:16:16.476 =================================================================================================================== 00:16:16.476 Total : 1324.87 82.80 120.44 0.00 43184.05 4341.76 36044.80 00:16:16.476 [2024-05-13 20:29:32.215540] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:16.476 [2024-05-13 20:29:32.215565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2226080 (9): Bad file descriptor 00:16:16.476 [2024-05-13 20:29:32.219038] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:16:16.476 [2024-05-13 20:29:32.219123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:16.476 [2024-05-13 20:29:32.219145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.476 [2024-05-13 20:29:32.219158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:16:16.476 [2024-05-13 20:29:32.219166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:16:16.476 [2024-05-13 20:29:32.219177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:16:16.476 [2024-05-13 20:29:32.219184] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2226080 00:16:16.476 [2024-05-13 20:29:32.219203] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2226080 (9): Bad file descriptor 00:16:16.476 [2024-05-13 
20:29:32.219214] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:16.476 [2024-05-13 20:29:32.219221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:16.476 [2024-05-13 20:29:32.219229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:16.476 [2024-05-13 20:29:32.219241] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:16.476 20:29:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.476 20:29:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:17.420 20:29:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2994197 00:16:17.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2994197) - No such process 00:16:17.420 20:29:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:17.420 20:29:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:17.420 20:29:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:17.420 20:29:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:17.420 20:29:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:17.420 20:29:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:17.420 20:29:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:17.420 20:29:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:17.420 { 00:16:17.420 "params": { 00:16:17.420 "name": "Nvme$subsystem", 00:16:17.420 "trtype": "$TEST_TRANSPORT", 00:16:17.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:17.420 "adrfam": "ipv4", 00:16:17.420 "trsvcid": "$NVMF_PORT", 00:16:17.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:17.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:17.420 "hdgst": ${hdgst:-false}, 00:16:17.420 "ddgst": ${ddgst:-false} 00:16:17.420 }, 00:16:17.420 "method": "bdev_nvme_attach_controller" 00:16:17.420 } 00:16:17.420 EOF 00:16:17.420 )") 00:16:17.420 20:29:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:17.420 20:29:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:17.420 20:29:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:17.420 20:29:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:17.420 "params": { 00:16:17.420 "name": "Nvme0", 00:16:17.420 "trtype": "tcp", 00:16:17.420 "traddr": "10.0.0.2", 00:16:17.420 "adrfam": "ipv4", 00:16:17.420 "trsvcid": "4420", 00:16:17.420 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:17.420 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:17.420 "hdgst": false, 00:16:17.420 "ddgst": false 00:16:17.420 }, 00:16:17.420 "method": "bdev_nvme_attach_controller" 00:16:17.420 }' 00:16:17.420 [2024-05-13 20:29:33.282284] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:16:17.420 [2024-05-13 20:29:33.282374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994605 ] 00:16:17.420 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.420 [2024-05-13 20:29:33.349524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.681 [2024-05-13 20:29:33.413096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.681 Running I/O for 1 seconds... 00:16:19.067 00:16:19.067 Latency(us) 00:16:19.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.067 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:19.067 Verification LBA range: start 0x0 length 0x400 00:16:19.067 Nvme0n1 : 1.02 1448.38 90.52 0.00 0.00 43454.92 2785.28 34952.53 00:16:19.067 =================================================================================================================== 00:16:19.067 Total : 1448.38 90.52 0.00 0.00 43454.92 2785.28 34952.53 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:19.067 rmmod nvme_tcp 00:16:19.067 rmmod nvme_fabrics 00:16:19.067 rmmod nvme_keyring 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2993884 ']' 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2993884 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 2993884 ']' 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 2993884 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2993884 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2993884' 00:16:19.067 killing process with pid 2993884 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 2993884 00:16:19.067 [2024-05-13 20:29:34.890989] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:19.067 20:29:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 2993884 00:16:19.067 [2024-05-13 20:29:34.995642] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:19.327 20:29:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:19.327 20:29:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:19.327 20:29:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:19.327 20:29:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.327 20:29:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:19.327 20:29:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.327 20:29:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.327 20:29:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.299 20:29:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:21.299 20:29:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:21.299 00:16:21.299 real 0m14.962s 00:16:21.299 user 0m22.477s 00:16:21.299 sys 0m6.958s 00:16:21.299 20:29:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:21.299 20:29:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.299 ************************************ 00:16:21.299 END TEST nvmf_host_management 00:16:21.299 ************************************ 00:16:21.299 20:29:37 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:21.299 20:29:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:21.299 20:29:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:21.299 20:29:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:21.299 ************************************ 00:16:21.299 START TEST nvmf_lvol 00:16:21.299 ************************************ 00:16:21.299 20:29:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:21.560 * Looking for test storage... 
00:16:21.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.560 20:29:37 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:21.560 20:29:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:29.699 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:29.699 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:29.699 Found net devices under 0000:31:00.0: cvl_0_0 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:29.699 Found net devices under 0000:31:00.1: cvl_0_1 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:29.699 
20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:29.699 20:29:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:29.699 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:29.699 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:29.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:29.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:16:29.700 00:16:29.700 --- 10.0.0.2 ping statistics --- 00:16:29.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.700 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:29.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:29.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:16:29.700 00:16:29.700 --- 10.0.0.1 ping statistics --- 00:16:29.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.700 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2999500 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2999500 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 2999500 ']' 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:29.700 20:29:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:29.700 [2024-05-13 20:29:45.150079] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:16:29.700 [2024-05-13 20:29:45.150155] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.700 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.700 [2024-05-13 20:29:45.228945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:29.700 [2024-05-13 20:29:45.303375] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.700 [2024-05-13 20:29:45.303416] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
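Note: the nvmf_tcp_init trace above builds the TCP test bed by splitting the two E810 ports across network namespaces, so the 10.0.0.1 <-> 10.0.0.2 traffic goes over the physical ports instead of being answered locally. A rough sketch of the sequence, using the interface names reported in this run (cvl_0_0 becomes the target side inside the cvl_0_0_ns_spdk namespace, cvl_0_1 stays in the root namespace as the initiator side; "nvmf_tgt" stands for the full build/bin path shown in the trace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x7   # target runs inside the namespace

With the target process launched inside the namespace, it listens on 10.0.0.2 while the initiator-side tools connect from the root namespace over cvl_0_1.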
00:16:29.700 [2024-05-13 20:29:45.303424] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.700 [2024-05-13 20:29:45.303430] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.700 [2024-05-13 20:29:45.303435] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.700 [2024-05-13 20:29:45.303592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.700 [2024-05-13 20:29:45.303724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.700 [2024-05-13 20:29:45.303727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.272 20:29:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:30.272 20:29:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:30.272 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:30.272 20:29:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.272 20:29:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:30.272 20:29:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.272 20:29:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:30.272 [2024-05-13 20:29:46.104120] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.272 20:29:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:30.533 20:29:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:30.533 20:29:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:30.533 20:29:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:30.533 20:29:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:30.795 20:29:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:31.056 20:29:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d5bfed65-9e37-4f0d-a51a-518795e053b7 00:16:31.056 20:29:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d5bfed65-9e37-4f0d-a51a-518795e053b7 lvol 20 00:16:31.056 20:29:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=11a04b87-3d59-4567-92ed-65b1e4923dda 00:16:31.056 20:29:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:31.318 20:29:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 11a04b87-3d59-4567-92ed-65b1e4923dda 00:16:31.579 20:29:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
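Note: the rpc.py calls above assemble the block stack that nvmf_lvol exercises: two malloc bdevs combined into a raid0, an lvstore on top of the raid, one lvol carved out of it, and that lvol exported over NVMe/TCP. A condensed sketch of the sequence from the trace ("rpc.py" stands for the full scripts/rpc.py path; the bracketed UUIDs are placeholders for the values reported in this run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512            # Malloc0
    rpc.py bdev_malloc_create 64 512            # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs               # -> d5bfed65-...
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20           # -> 11a04b87-...
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420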
00:16:31.579 [2024-05-13 20:29:47.415561] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:31.579 [2024-05-13 20:29:47.415825] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.579 20:29:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:31.840 20:29:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2999998 00:16:31.840 20:29:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:31.840 20:29:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:31.840 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.785 20:29:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 11a04b87-3d59-4567-92ed-65b1e4923dda MY_SNAPSHOT 00:16:33.046 20:29:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=25e36dd5-21ea-4489-90c8-df83c6498c13 00:16:33.046 20:29:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 11a04b87-3d59-4567-92ed-65b1e4923dda 30 00:16:33.307 20:29:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 25e36dd5-21ea-4489-90c8-df83c6498c13 MY_CLONE 00:16:33.307 20:29:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a83a457d-4408-4ed3-9b62-92299d689954 00:16:33.307 20:29:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a83a457d-4408-4ed3-9b62-92299d689954 00:16:33.879 20:29:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2999998 00:16:42.022 Initializing NVMe Controllers 00:16:42.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:42.022 Controller IO queue size 128, less than required. 00:16:42.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:42.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:42.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:42.022 Initialization complete. Launching workers. 
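Note: while spdk_nvme_perf drives 10 seconds of 4 KiB random writes against the exported namespace (queue depth 128, core mask 0x18), the test mutates the lvol underneath it: snapshot, resize, clone the snapshot, inflate the clone, then wait for perf to finish; the perf summary follows below. A sketch of the calls from the trace, with perf shown backgrounded since the script records perf_pid and waits on it later (UUID placeholders as before):

    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT       # -> 25e36dd5-...
    rpc.py bdev_lvol_resize <lvol-uuid> 30
    rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE         # -> a83a457d-...
    rpc.py bdev_lvol_inflate <clone-uuid>
    wait <perf-pid>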
00:16:42.022 ======================================================== 00:16:42.022 Latency(us) 00:16:42.022 Device Information : IOPS MiB/s Average min max 00:16:42.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12956.20 50.61 9881.27 1642.69 55913.69 00:16:42.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18477.20 72.18 6927.40 2083.71 49782.26 00:16:42.022 ======================================================== 00:16:42.022 Total : 31433.40 122.79 8144.92 1642.69 55913.69 00:16:42.022 00:16:42.022 20:29:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:42.283 20:29:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 11a04b87-3d59-4567-92ed-65b1e4923dda 00:16:42.544 20:29:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d5bfed65-9e37-4f0d-a51a-518795e053b7 00:16:42.544 20:29:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:42.544 20:29:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:42.544 20:29:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:42.544 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.805 rmmod nvme_tcp 00:16:42.805 rmmod nvme_fabrics 00:16:42.805 rmmod nvme_keyring 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2999500 ']' 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2999500 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 2999500 ']' 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 2999500 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2999500 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2999500' 00:16:42.805 killing process with pid 2999500 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 2999500 00:16:42.805 [2024-05-13 20:29:58.610161] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:16:42.805 20:29:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 2999500 00:16:43.066 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.066 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.066 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.066 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.066 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.066 20:29:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.066 20:29:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.066 20:29:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.980 20:30:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:44.980 00:16:44.980 real 0m23.661s 00:16:44.980 user 1m3.605s 00:16:44.980 sys 0m8.066s 00:16:44.980 20:30:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:44.980 20:30:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:44.980 ************************************ 00:16:44.980 END TEST nvmf_lvol 00:16:44.980 ************************************ 00:16:44.980 20:30:00 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:44.980 20:30:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:44.980 20:30:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:44.980 20:30:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:45.241 ************************************ 00:16:45.241 START TEST nvmf_lvs_grow 00:16:45.241 ************************************ 00:16:45.241 20:30:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:45.241 * Looking for test storage... 
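Note: the nvmf_lvol teardown earlier in the trace is the usual trap-handler path: delete the subsystem, the lvol, and the lvstore over RPC, then nvmftestfini kills the nvmf_tgt process, unloads the nvme-tcp/nvme-fabrics modules, and remove_spdk_ns tears the namespace down (only the final address flush on cvl_0_1 is visible in the trace). Roughly:

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete <lvol-uuid>
    rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>
    kill <nvmfpid>                        # killprocess 2999500
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1              # part of remove_spdk_ns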
00:16:45.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.241 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:45.242 20:30:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:53.394 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:53.394 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:53.394 Found net devices under 0000:31:00.0: cvl_0_0 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:53.394 Found net devices under 0000:31:00.1: cvl_0_1 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.394 20:30:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:53.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:16:53.395 00:16:53.395 --- 10.0.0.2 ping statistics --- 00:16:53.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.395 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:53.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:16:53.395 00:16:53.395 --- 10.0.0.1 ping statistics --- 00:16:53.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.395 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3007337 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3007337 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3007337 ']' 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:53.395 20:30:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:53.395 [2024-05-13 20:30:09.328911] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:16:53.395 [2024-05-13 20:30:09.328961] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.657 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.657 [2024-05-13 20:30:09.401995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.657 [2024-05-13 20:30:09.466260] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.657 [2024-05-13 20:30:09.466295] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:53.657 [2024-05-13 20:30:09.466303] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.657 [2024-05-13 20:30:09.466310] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.657 [2024-05-13 20:30:09.466320] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.657 [2024-05-13 20:30:09.466343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.230 20:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:54.231 20:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:54.231 20:30:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:54.231 20:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:54.231 20:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:54.231 20:30:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.231 20:30:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:54.492 [2024-05-13 20:30:10.281894] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.492 20:30:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:54.492 20:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:54.492 20:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:54.492 20:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:54.492 ************************************ 00:16:54.492 START TEST lvs_grow_clean 00:16:54.492 ************************************ 00:16:54.492 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:54.492 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:54.492 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:54.492 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:54.492 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:54.492 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:54.492 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:54.492 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:54.492 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:54.492 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:54.754 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:54.754 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:55.015 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e6f389b5-9d8c-4bd9-a538-9220c06a2470 00:16:55.015 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6f389b5-9d8c-4bd9-a538-9220c06a2470 00:16:55.015 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:55.015 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:55.015 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:55.015 20:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e6f389b5-9d8c-4bd9-a538-9220c06a2470 lvol 150 00:16:55.276 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=adb21d6a-0034-4451-aabe-09c658d050c6 00:16:55.276 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:55.276 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:55.276 [2024-05-13 20:30:11.178752] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:55.276 [2024-05-13 20:30:11.178807] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:55.276 true 00:16:55.276 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6f389b5-9d8c-4bd9-a538-9220c06a2470 00:16:55.276 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:55.536 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:55.536 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:55.797 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 adb21d6a-0034-4451-aabe-09c658d050c6 00:16:55.797 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:56.058 [2024-05-13 20:30:11.812468] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:56.058 [2024-05-13 
20:30:11.812694] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.058 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:56.058 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3007969 00:16:56.058 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:56.058 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:56.058 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3007969 /var/tmp/bdevperf.sock 00:16:56.058 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3007969 ']' 00:16:56.058 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.058 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:56.058 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.058 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:56.058 20:30:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:56.319 [2024-05-13 20:30:12.024601] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
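Note: lvs_grow_clean demonstrates growing an lvstore after its backing device gets bigger. The trace above creates a 200 MiB file-backed AIO bdev, puts an lvstore with 4 MiB clusters on it (49 data clusters), carves a 150 MiB lvol, then truncates the file to 400 MiB and rescans the AIO bdev; the lvol is exported over NVMe/TCP and bdevperf attaches to keep I/O running. Later in the trace, bdev_lvol_grow_lvstore expands the lvstore so total_data_clusters goes from 49 to 99. Condensed sketch ("aio_bdev_file" abbreviates the full test/nvmf/target/aio_bdev path; UUID placeholders stand for the values reported in this run):

    truncate -s 200M aio_bdev_file
    rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs       # -> e6f389b5-...
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150          # -> adb21d6a-...
    truncate -s 400M aio_bdev_file
    rpc.py bdev_aio_rescan aio_bdev
    # ... export over tcp, attach bdevperf, start randwrite I/O ...
    rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 99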
00:16:56.319 [2024-05-13 20:30:12.024650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007969 ] 00:16:56.319 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.319 [2024-05-13 20:30:12.108289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.319 [2024-05-13 20:30:12.172433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.891 20:30:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:56.891 20:30:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:56.891 20:30:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:57.465 Nvme0n1 00:16:57.465 20:30:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:57.465 [ 00:16:57.465 { 00:16:57.465 "name": "Nvme0n1", 00:16:57.465 "aliases": [ 00:16:57.465 "adb21d6a-0034-4451-aabe-09c658d050c6" 00:16:57.465 ], 00:16:57.465 "product_name": "NVMe disk", 00:16:57.465 "block_size": 4096, 00:16:57.465 "num_blocks": 38912, 00:16:57.465 "uuid": "adb21d6a-0034-4451-aabe-09c658d050c6", 00:16:57.465 "assigned_rate_limits": { 00:16:57.465 "rw_ios_per_sec": 0, 00:16:57.465 "rw_mbytes_per_sec": 0, 00:16:57.465 "r_mbytes_per_sec": 0, 00:16:57.465 "w_mbytes_per_sec": 0 00:16:57.465 }, 00:16:57.465 "claimed": false, 00:16:57.465 "zoned": false, 00:16:57.465 "supported_io_types": { 00:16:57.465 "read": true, 00:16:57.465 "write": true, 00:16:57.465 "unmap": true, 00:16:57.465 "write_zeroes": true, 00:16:57.465 "flush": true, 00:16:57.465 "reset": true, 00:16:57.465 "compare": true, 00:16:57.465 "compare_and_write": true, 00:16:57.465 "abort": true, 00:16:57.465 "nvme_admin": true, 00:16:57.465 "nvme_io": true 00:16:57.465 }, 00:16:57.465 "memory_domains": [ 00:16:57.465 { 00:16:57.465 "dma_device_id": "system", 00:16:57.465 "dma_device_type": 1 00:16:57.465 } 00:16:57.465 ], 00:16:57.465 "driver_specific": { 00:16:57.465 "nvme": [ 00:16:57.465 { 00:16:57.465 "trid": { 00:16:57.465 "trtype": "TCP", 00:16:57.465 "adrfam": "IPv4", 00:16:57.465 "traddr": "10.0.0.2", 00:16:57.465 "trsvcid": "4420", 00:16:57.465 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:57.465 }, 00:16:57.465 "ctrlr_data": { 00:16:57.465 "cntlid": 1, 00:16:57.465 "vendor_id": "0x8086", 00:16:57.465 "model_number": "SPDK bdev Controller", 00:16:57.465 "serial_number": "SPDK0", 00:16:57.465 "firmware_revision": "24.05", 00:16:57.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:57.465 "oacs": { 00:16:57.465 "security": 0, 00:16:57.465 "format": 0, 00:16:57.465 "firmware": 0, 00:16:57.465 "ns_manage": 0 00:16:57.465 }, 00:16:57.465 "multi_ctrlr": true, 00:16:57.465 "ana_reporting": false 00:16:57.465 }, 00:16:57.465 "vs": { 00:16:57.465 "nvme_version": "1.3" 00:16:57.465 }, 00:16:57.465 "ns_data": { 00:16:57.465 "id": 1, 00:16:57.465 "can_share": true 00:16:57.465 } 00:16:57.465 } 00:16:57.465 ], 00:16:57.465 "mp_policy": "active_passive" 00:16:57.465 } 00:16:57.465 } 00:16:57.465 ] 00:16:57.465 20:30:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3008305 00:16:57.465 20:30:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:57.465 20:30:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:57.465 Running I/O for 10 seconds... 00:16:58.851 Latency(us) 00:16:58.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.851 Nvme0n1 : 1.00 18698.00 73.04 0.00 0.00 0.00 0.00 0.00 00:16:58.851 =================================================================================================================== 00:16:58.851 Total : 18698.00 73.04 0.00 0.00 0.00 0.00 0.00 00:16:58.851 00:16:59.421 20:30:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e6f389b5-9d8c-4bd9-a538-9220c06a2470 00:16:59.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.681 Nvme0n1 : 2.00 18821.00 73.52 0.00 0.00 0.00 0.00 0.00 00:16:59.681 =================================================================================================================== 00:16:59.681 Total : 18821.00 73.52 0.00 0.00 0.00 0.00 0.00 00:16:59.681 00:16:59.681 true 00:16:59.681 20:30:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6f389b5-9d8c-4bd9-a538-9220c06a2470 00:16:59.681 20:30:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:59.682 20:30:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:59.682 20:30:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:59.682 20:30:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3008305 00:17:00.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.620 Nvme0n1 : 3.00 18882.67 73.76 0.00 0.00 0.00 0.00 0.00 00:17:00.620 =================================================================================================================== 00:17:00.620 Total : 18882.67 73.76 0.00 0.00 0.00 0.00 0.00 00:17:00.620 00:17:01.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.576 Nvme0n1 : 4.00 18930.00 73.95 0.00 0.00 0.00 0.00 0.00 00:17:01.576 =================================================================================================================== 00:17:01.576 Total : 18930.00 73.95 0.00 0.00 0.00 0.00 0.00 00:17:01.576 00:17:02.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.559 Nvme0n1 : 5.00 18948.60 74.02 0.00 0.00 0.00 0.00 0.00 00:17:02.559 =================================================================================================================== 00:17:02.559 Total : 18948.60 74.02 0.00 0.00 0.00 0.00 0.00 00:17:02.559 00:17:03.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.503 Nvme0n1 : 6.00 18974.83 74.12 0.00 0.00 0.00 0.00 0.00 00:17:03.503 
=================================================================================================================== 00:17:03.503 Total : 18974.83 74.12 0.00 0.00 0.00 0.00 0.00 00:17:03.503 00:17:04.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.446 Nvme0n1 : 7.00 18990.00 74.18 0.00 0.00 0.00 0.00 0.00 00:17:04.446 =================================================================================================================== 00:17:04.446 Total : 18990.00 74.18 0.00 0.00 0.00 0.00 0.00 00:17:04.446 00:17:05.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.852 Nvme0n1 : 8.00 18999.00 74.21 0.00 0.00 0.00 0.00 0.00 00:17:05.852 =================================================================================================================== 00:17:05.852 Total : 18999.00 74.21 0.00 0.00 0.00 0.00 0.00 00:17:05.852 00:17:06.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.791 Nvme0n1 : 9.00 19019.67 74.30 0.00 0.00 0.00 0.00 0.00 00:17:06.791 =================================================================================================================== 00:17:06.791 Total : 19019.67 74.30 0.00 0.00 0.00 0.00 0.00 00:17:06.791 00:17:07.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.735 Nvme0n1 : 10.00 19029.10 74.33 0.00 0.00 0.00 0.00 0.00 00:17:07.735 =================================================================================================================== 00:17:07.735 Total : 19029.10 74.33 0.00 0.00 0.00 0.00 0.00 00:17:07.735 00:17:07.735 00:17:07.735 Latency(us) 00:17:07.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.735 Nvme0n1 : 10.01 19030.08 74.34 0.00 0.00 6722.12 4123.31 17913.17 00:17:07.735 =================================================================================================================== 00:17:07.735 Total : 19030.08 74.34 0.00 0.00 6722.12 4123.31 17913.17 00:17:07.735 0 00:17:07.735 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3007969 00:17:07.735 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3007969 ']' 00:17:07.735 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3007969 00:17:07.735 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:07.735 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:07.735 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3007969 00:17:07.735 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:07.735 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:07.735 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3007969' 00:17:07.735 killing process with pid 3007969 00:17:07.735 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3007969 00:17:07.735 Received shutdown signal, test time was about 10.000000 seconds 00:17:07.735 00:17:07.735 Latency(us) 00:17:07.735 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:07.735 =================================================================================================================== 00:17:07.735 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.735 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3007969 00:17:07.735 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:07.997 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:07.997 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6f389b5-9d8c-4bd9-a538-9220c06a2470 00:17:07.997 20:30:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:08.256 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:08.256 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:08.256 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:08.518 [2024-05-13 20:30:24.203425] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6f389b5-9d8c-4bd9-a538-9220c06a2470 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6f389b5-9d8c-4bd9-a538-9220c06a2470 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6f389b5-9d8c-4bd9-a538-9220c06a2470 00:17:08.518 request: 00:17:08.518 { 00:17:08.518 "uuid": "e6f389b5-9d8c-4bd9-a538-9220c06a2470", 00:17:08.518 "method": "bdev_lvol_get_lvstores", 00:17:08.518 "req_id": 1 00:17:08.518 } 00:17:08.518 Got JSON-RPC error response 00:17:08.518 response: 00:17:08.518 { 00:17:08.518 "code": -19, 00:17:08.518 "message": "No such device" 00:17:08.518 } 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:08.518 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:08.779 aio_bdev 00:17:08.779 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev adb21d6a-0034-4451-aabe-09c658d050c6 00:17:08.779 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=adb21d6a-0034-4451-aabe-09c658d050c6 00:17:08.779 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:08.779 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:08.779 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:08.779 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:08.779 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:08.779 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b adb21d6a-0034-4451-aabe-09c658d050c6 -t 2000 00:17:09.040 [ 00:17:09.040 { 00:17:09.040 "name": "adb21d6a-0034-4451-aabe-09c658d050c6", 00:17:09.040 "aliases": [ 00:17:09.040 "lvs/lvol" 00:17:09.040 ], 00:17:09.040 "product_name": "Logical Volume", 00:17:09.040 "block_size": 4096, 00:17:09.040 "num_blocks": 38912, 00:17:09.040 "uuid": "adb21d6a-0034-4451-aabe-09c658d050c6", 00:17:09.040 "assigned_rate_limits": { 00:17:09.040 "rw_ios_per_sec": 0, 00:17:09.040 "rw_mbytes_per_sec": 0, 00:17:09.040 "r_mbytes_per_sec": 0, 00:17:09.040 "w_mbytes_per_sec": 0 00:17:09.040 }, 00:17:09.040 "claimed": false, 00:17:09.040 "zoned": false, 00:17:09.040 "supported_io_types": { 00:17:09.040 "read": true, 00:17:09.040 "write": true, 00:17:09.040 "unmap": true, 00:17:09.041 "write_zeroes": true, 00:17:09.041 "flush": false, 00:17:09.041 "reset": true, 00:17:09.041 "compare": false, 00:17:09.041 "compare_and_write": false, 00:17:09.041 "abort": false, 00:17:09.041 "nvme_admin": false, 00:17:09.041 "nvme_io": false 00:17:09.041 }, 00:17:09.041 "driver_specific": { 00:17:09.041 "lvol": { 00:17:09.041 "lvol_store_uuid": "e6f389b5-9d8c-4bd9-a538-9220c06a2470", 00:17:09.041 "base_bdev": "aio_bdev", 
00:17:09.041 "thin_provision": false, 00:17:09.041 "num_allocated_clusters": 38, 00:17:09.041 "snapshot": false, 00:17:09.041 "clone": false, 00:17:09.041 "esnap_clone": false 00:17:09.041 } 00:17:09.041 } 00:17:09.041 } 00:17:09.041 ] 00:17:09.041 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:09.041 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6f389b5-9d8c-4bd9-a538-9220c06a2470 00:17:09.041 20:30:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:09.303 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:09.303 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6f389b5-9d8c-4bd9-a538-9220c06a2470 00:17:09.303 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:09.303 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:09.303 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete adb21d6a-0034-4451-aabe-09c658d050c6 00:17:09.564 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e6f389b5-9d8c-4bd9-a538-9220c06a2470 00:17:09.564 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:09.826 00:17:09.826 real 0m15.282s 00:17:09.826 user 0m15.054s 00:17:09.826 sys 0m1.259s 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:09.826 ************************************ 00:17:09.826 END TEST lvs_grow_clean 00:17:09.826 ************************************ 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:09.826 ************************************ 00:17:09.826 START TEST lvs_grow_dirty 00:17:09.826 ************************************ 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:09.826 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:10.088 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:10.088 20:30:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:10.350 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=73835737-73c2-48c4-841d-0399b4530e8f 00:17:10.350 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73835737-73c2-48c4-841d-0399b4530e8f 00:17:10.350 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:10.350 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:10.350 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:10.350 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 73835737-73c2-48c4-841d-0399b4530e8f lvol 150 00:17:10.612 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=66eb38ed-1629-4463-a961-dff7a0d9ad74 00:17:10.612 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:10.612 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:10.612 [2024-05-13 20:30:26.499367] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:10.612 [2024-05-13 20:30:26.499420] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:10.612 true 00:17:10.612 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73835737-73c2-48c4-841d-0399b4530e8f 00:17:10.612 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:17:10.873 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:10.873 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:11.136 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 66eb38ed-1629-4463-a961-dff7a0d9ad74 00:17:11.136 20:30:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:11.397 [2024-05-13 20:30:27.121238] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.397 20:30:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:11.397 20:30:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3011045 00:17:11.397 20:30:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:11.397 20:30:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:11.397 20:30:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3011045 /var/tmp/bdevperf.sock 00:17:11.397 20:30:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3011045 ']' 00:17:11.397 20:30:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.397 20:30:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:11.397 20:30:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.397 20:30:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:11.397 20:30:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:11.397 [2024-05-13 20:30:27.333887] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:17:11.397 [2024-05-13 20:30:27.333937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3011045 ] 00:17:11.658 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.658 [2024-05-13 20:30:27.413142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.658 [2024-05-13 20:30:27.466775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.233 20:30:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:12.233 20:30:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:12.233 20:30:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:12.495 Nvme0n1 00:17:12.495 20:30:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:12.756 [ 00:17:12.756 { 00:17:12.756 "name": "Nvme0n1", 00:17:12.756 "aliases": [ 00:17:12.756 "66eb38ed-1629-4463-a961-dff7a0d9ad74" 00:17:12.756 ], 00:17:12.756 "product_name": "NVMe disk", 00:17:12.756 "block_size": 4096, 00:17:12.756 "num_blocks": 38912, 00:17:12.756 "uuid": "66eb38ed-1629-4463-a961-dff7a0d9ad74", 00:17:12.756 "assigned_rate_limits": { 00:17:12.756 "rw_ios_per_sec": 0, 00:17:12.756 "rw_mbytes_per_sec": 0, 00:17:12.756 "r_mbytes_per_sec": 0, 00:17:12.756 "w_mbytes_per_sec": 0 00:17:12.756 }, 00:17:12.756 "claimed": false, 00:17:12.756 "zoned": false, 00:17:12.756 "supported_io_types": { 00:17:12.756 "read": true, 00:17:12.756 "write": true, 00:17:12.756 "unmap": true, 00:17:12.756 "write_zeroes": true, 00:17:12.756 "flush": true, 00:17:12.756 "reset": true, 00:17:12.756 "compare": true, 00:17:12.756 "compare_and_write": true, 00:17:12.756 "abort": true, 00:17:12.756 "nvme_admin": true, 00:17:12.756 "nvme_io": true 00:17:12.756 }, 00:17:12.756 "memory_domains": [ 00:17:12.756 { 00:17:12.756 "dma_device_id": "system", 00:17:12.756 "dma_device_type": 1 00:17:12.756 } 00:17:12.756 ], 00:17:12.756 "driver_specific": { 00:17:12.756 "nvme": [ 00:17:12.756 { 00:17:12.756 "trid": { 00:17:12.756 "trtype": "TCP", 00:17:12.756 "adrfam": "IPv4", 00:17:12.756 "traddr": "10.0.0.2", 00:17:12.756 "trsvcid": "4420", 00:17:12.756 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:12.756 }, 00:17:12.756 "ctrlr_data": { 00:17:12.756 "cntlid": 1, 00:17:12.756 "vendor_id": "0x8086", 00:17:12.756 "model_number": "SPDK bdev Controller", 00:17:12.756 "serial_number": "SPDK0", 00:17:12.756 "firmware_revision": "24.05", 00:17:12.756 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:12.756 "oacs": { 00:17:12.756 "security": 0, 00:17:12.756 "format": 0, 00:17:12.756 "firmware": 0, 00:17:12.757 "ns_manage": 0 00:17:12.757 }, 00:17:12.757 "multi_ctrlr": true, 00:17:12.757 "ana_reporting": false 00:17:12.757 }, 00:17:12.757 "vs": { 00:17:12.757 "nvme_version": "1.3" 00:17:12.757 }, 00:17:12.757 "ns_data": { 00:17:12.757 "id": 1, 00:17:12.757 "can_share": true 00:17:12.757 } 00:17:12.757 } 00:17:12.757 ], 00:17:12.757 "mp_policy": "active_passive" 00:17:12.757 } 00:17:12.757 } 00:17:12.757 ] 00:17:12.757 20:30:28 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3011363 00:17:12.757 20:30:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:12.757 20:30:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:12.757 Running I/O for 10 seconds... 00:17:13.699 Latency(us) 00:17:13.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.699 Nvme0n1 : 1.00 18776.00 73.34 0.00 0.00 0.00 0.00 0.00 00:17:13.699 =================================================================================================================== 00:17:13.699 Total : 18776.00 73.34 0.00 0.00 0.00 0.00 0.00 00:17:13.699 00:17:14.644 20:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 73835737-73c2-48c4-841d-0399b4530e8f 00:17:14.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.644 Nvme0n1 : 2.00 18828.00 73.55 0.00 0.00 0.00 0.00 0.00 00:17:14.644 =================================================================================================================== 00:17:14.644 Total : 18828.00 73.55 0.00 0.00 0.00 0.00 0.00 00:17:14.644 00:17:14.905 true 00:17:14.905 20:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73835737-73c2-48c4-841d-0399b4530e8f 00:17:14.905 20:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:14.905 20:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:14.905 20:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:14.905 20:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3011363 00:17:15.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.847 Nvme0n1 : 3.00 18866.33 73.70 0.00 0.00 0.00 0.00 0.00 00:17:15.847 =================================================================================================================== 00:17:15.847 Total : 18866.33 73.70 0.00 0.00 0.00 0.00 0.00 00:17:15.847 00:17:16.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.790 Nvme0n1 : 4.00 18901.50 73.83 0.00 0.00 0.00 0.00 0.00 00:17:16.790 =================================================================================================================== 00:17:16.790 Total : 18901.50 73.83 0.00 0.00 0.00 0.00 0.00 00:17:16.790 00:17:17.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.733 Nvme0n1 : 5.00 18923.00 73.92 0.00 0.00 0.00 0.00 0.00 00:17:17.733 =================================================================================================================== 00:17:17.733 Total : 18923.00 73.92 0.00 0.00 0.00 0.00 0.00 00:17:17.733 00:17:18.674 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.674 Nvme0n1 : 6.00 18947.67 74.01 0.00 0.00 0.00 0.00 0.00 00:17:18.674 
=================================================================================================================== 00:17:18.674 Total : 18947.67 74.01 0.00 0.00 0.00 0.00 0.00 00:17:18.674 00:17:20.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.059 Nvme0n1 : 7.00 18965.29 74.08 0.00 0.00 0.00 0.00 0.00 00:17:20.059 =================================================================================================================== 00:17:20.059 Total : 18965.29 74.08 0.00 0.00 0.00 0.00 0.00 00:17:20.059 00:17:20.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.631 Nvme0n1 : 8.00 18978.62 74.14 0.00 0.00 0.00 0.00 0.00 00:17:20.631 =================================================================================================================== 00:17:20.631 Total : 18978.62 74.14 0.00 0.00 0.00 0.00 0.00 00:17:20.631 00:17:22.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.032 Nvme0n1 : 9.00 18988.89 74.18 0.00 0.00 0.00 0.00 0.00 00:17:22.032 =================================================================================================================== 00:17:22.032 Total : 18988.89 74.18 0.00 0.00 0.00 0.00 0.00 00:17:22.032 00:17:22.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.976 Nvme0n1 : 10.00 18997.20 74.21 0.00 0.00 0.00 0.00 0.00 00:17:22.976 =================================================================================================================== 00:17:22.976 Total : 18997.20 74.21 0.00 0.00 0.00 0.00 0.00 00:17:22.976 00:17:22.976 00:17:22.976 Latency(us) 00:17:22.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.976 Nvme0n1 : 10.01 18997.27 74.21 0.00 0.00 6733.86 1617.92 11960.32 00:17:22.976 =================================================================================================================== 00:17:22.976 Total : 18997.27 74.21 0.00 0.00 6733.86 1617.92 11960.32 00:17:22.976 0 00:17:22.976 20:30:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3011045 00:17:22.976 20:30:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3011045 ']' 00:17:22.977 20:30:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3011045 00:17:22.977 20:30:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:22.977 20:30:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:22.977 20:30:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3011045 00:17:22.977 20:30:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:22.977 20:30:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:22.977 20:30:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3011045' 00:17:22.977 killing process with pid 3011045 00:17:22.977 20:30:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3011045 00:17:22.977 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.977 00:17:22.977 Latency(us) 00:17:22.977 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:22.977 =================================================================================================================== 00:17:22.977 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:22.977 20:30:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3011045 00:17:22.977 20:30:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:23.237 20:30:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:23.237 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73835737-73c2-48c4-841d-0399b4530e8f 00:17:23.237 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3007337 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3007337 00:17:23.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3007337 Killed "${NVMF_APP[@]}" "$@" 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3013405 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3013405 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3013405 ']' 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:23.499 20:30:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:23.499 [2024-05-13 20:30:39.396644] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:17:23.499 [2024-05-13 20:30:39.396697] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.499 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.760 [2024-05-13 20:30:39.468559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.760 [2024-05-13 20:30:39.532778] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.760 [2024-05-13 20:30:39.532814] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.760 [2024-05-13 20:30:39.532821] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.760 [2024-05-13 20:30:39.532828] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.760 [2024-05-13 20:30:39.532836] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:23.760 [2024-05-13 20:30:39.532859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.332 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:24.332 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:24.332 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:24.332 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:24.332 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:24.332 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.332 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:24.593 [2024-05-13 20:30:40.337731] blobstore.c:4805:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:24.593 [2024-05-13 20:30:40.337820] blobstore.c:4752:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:24.593 [2024-05-13 20:30:40.337848] blobstore.c:4752:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:24.593 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:24.593 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 66eb38ed-1629-4463-a961-dff7a0d9ad74 00:17:24.593 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=66eb38ed-1629-4463-a961-dff7a0d9ad74 00:17:24.593 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:24.593 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:24.593 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:24.593 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:24.593 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:24.593 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 66eb38ed-1629-4463-a961-dff7a0d9ad74 -t 2000 00:17:24.854 [ 00:17:24.854 { 00:17:24.854 "name": "66eb38ed-1629-4463-a961-dff7a0d9ad74", 00:17:24.854 "aliases": [ 00:17:24.854 "lvs/lvol" 00:17:24.854 ], 00:17:24.854 "product_name": "Logical Volume", 00:17:24.854 "block_size": 4096, 00:17:24.854 "num_blocks": 38912, 00:17:24.854 "uuid": "66eb38ed-1629-4463-a961-dff7a0d9ad74", 00:17:24.854 "assigned_rate_limits": { 00:17:24.854 "rw_ios_per_sec": 0, 00:17:24.854 "rw_mbytes_per_sec": 0, 00:17:24.854 "r_mbytes_per_sec": 0, 00:17:24.854 "w_mbytes_per_sec": 0 00:17:24.854 }, 00:17:24.854 "claimed": false, 00:17:24.854 "zoned": false, 00:17:24.854 "supported_io_types": { 00:17:24.854 "read": true, 00:17:24.854 "write": true, 00:17:24.854 "unmap": true, 00:17:24.854 "write_zeroes": true, 00:17:24.854 "flush": false, 00:17:24.854 "reset": true, 00:17:24.854 "compare": false, 00:17:24.854 "compare_and_write": false, 00:17:24.854 "abort": false, 00:17:24.854 "nvme_admin": false, 00:17:24.854 "nvme_io": false 00:17:24.854 }, 00:17:24.854 "driver_specific": { 00:17:24.854 "lvol": { 00:17:24.854 "lvol_store_uuid": "73835737-73c2-48c4-841d-0399b4530e8f", 00:17:24.854 "base_bdev": "aio_bdev", 00:17:24.854 "thin_provision": false, 00:17:24.854 "num_allocated_clusters": 38, 00:17:24.854 "snapshot": false, 00:17:24.854 "clone": false, 00:17:24.854 "esnap_clone": false 00:17:24.854 } 00:17:24.854 } 00:17:24.854 } 00:17:24.854 ] 00:17:24.854 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:24.854 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:24.854 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73835737-73c2-48c4-841d-0399b4530e8f 00:17:25.116 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:25.116 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73835737-73c2-48c4-841d-0399b4530e8f 00:17:25.116 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:25.116 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:25.116 20:30:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:25.378 [2024-05-13 20:30:41.085589] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
73835737-73c2-48c4-841d-0399b4530e8f 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73835737-73c2-48c4-841d-0399b4530e8f 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73835737-73c2-48c4-841d-0399b4530e8f 00:17:25.378 request: 00:17:25.378 { 00:17:25.378 "uuid": "73835737-73c2-48c4-841d-0399b4530e8f", 00:17:25.378 "method": "bdev_lvol_get_lvstores", 00:17:25.378 "req_id": 1 00:17:25.378 } 00:17:25.378 Got JSON-RPC error response 00:17:25.378 response: 00:17:25.378 { 00:17:25.378 "code": -19, 00:17:25.378 "message": "No such device" 00:17:25.378 } 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:25.378 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:25.639 aio_bdev 00:17:25.639 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 66eb38ed-1629-4463-a961-dff7a0d9ad74 00:17:25.639 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=66eb38ed-1629-4463-a961-dff7a0d9ad74 00:17:25.640 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:25.640 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:25.640 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
00:17:25.640 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:25.640 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:25.901 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 66eb38ed-1629-4463-a961-dff7a0d9ad74 -t 2000 00:17:25.901 [ 00:17:25.901 { 00:17:25.901 "name": "66eb38ed-1629-4463-a961-dff7a0d9ad74", 00:17:25.901 "aliases": [ 00:17:25.901 "lvs/lvol" 00:17:25.901 ], 00:17:25.901 "product_name": "Logical Volume", 00:17:25.901 "block_size": 4096, 00:17:25.901 "num_blocks": 38912, 00:17:25.901 "uuid": "66eb38ed-1629-4463-a961-dff7a0d9ad74", 00:17:25.901 "assigned_rate_limits": { 00:17:25.901 "rw_ios_per_sec": 0, 00:17:25.901 "rw_mbytes_per_sec": 0, 00:17:25.901 "r_mbytes_per_sec": 0, 00:17:25.901 "w_mbytes_per_sec": 0 00:17:25.901 }, 00:17:25.901 "claimed": false, 00:17:25.901 "zoned": false, 00:17:25.901 "supported_io_types": { 00:17:25.901 "read": true, 00:17:25.901 "write": true, 00:17:25.901 "unmap": true, 00:17:25.901 "write_zeroes": true, 00:17:25.901 "flush": false, 00:17:25.901 "reset": true, 00:17:25.901 "compare": false, 00:17:25.901 "compare_and_write": false, 00:17:25.901 "abort": false, 00:17:25.901 "nvme_admin": false, 00:17:25.901 "nvme_io": false 00:17:25.901 }, 00:17:25.901 "driver_specific": { 00:17:25.901 "lvol": { 00:17:25.901 "lvol_store_uuid": "73835737-73c2-48c4-841d-0399b4530e8f", 00:17:25.901 "base_bdev": "aio_bdev", 00:17:25.901 "thin_provision": false, 00:17:25.901 "num_allocated_clusters": 38, 00:17:25.901 "snapshot": false, 00:17:25.901 "clone": false, 00:17:25.901 "esnap_clone": false 00:17:25.901 } 00:17:25.901 } 00:17:25.901 } 00:17:25.901 ] 00:17:25.901 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:25.901 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73835737-73c2-48c4-841d-0399b4530e8f 00:17:25.901 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:26.162 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:26.162 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73835737-73c2-48c4-841d-0399b4530e8f 00:17:26.162 20:30:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:26.162 20:30:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:26.162 20:30:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 66eb38ed-1629-4463-a961-dff7a0d9ad74 00:17:26.423 20:30:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 73835737-73c2-48c4-841d-0399b4530e8f 00:17:26.685 20:30:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:26.685 20:30:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:26.685 00:17:26.685 real 0m16.839s 00:17:26.685 user 0m44.249s 00:17:26.685 sys 0m2.859s 00:17:26.685 20:30:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:26.685 20:30:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:26.685 ************************************ 00:17:26.685 END TEST lvs_grow_dirty 00:17:26.685 ************************************ 00:17:26.685 20:30:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:26.685 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:26.685 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:26.685 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:26.685 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:26.685 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:26.685 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:26.685 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:26.685 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:26.685 nvmf_trace.0 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.946 rmmod nvme_tcp 00:17:26.946 rmmod nvme_fabrics 00:17:26.946 rmmod nvme_keyring 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3013405 ']' 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3013405 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3013405 ']' 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3013405 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3013405 00:17:26.946 20:30:42 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3013405' 00:17:26.946 killing process with pid 3013405 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3013405 00:17:26.946 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3013405 00:17:27.208 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:27.208 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:27.208 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:27.208 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:27.208 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:27.208 20:30:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.208 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.208 20:30:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.155 20:30:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:29.155 00:17:29.155 real 0m44.046s 00:17:29.155 user 1m5.585s 00:17:29.155 sys 0m10.516s 00:17:29.155 20:30:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:29.155 20:30:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.155 ************************************ 00:17:29.155 END TEST nvmf_lvs_grow 00:17:29.155 ************************************ 00:17:29.155 20:30:45 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:29.155 20:30:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:29.155 20:30:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:29.155 20:30:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:29.155 ************************************ 00:17:29.155 START TEST nvmf_bdev_io_wait 00:17:29.155 ************************************ 00:17:29.155 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:29.416 * Looking for test storage... 
00:17:29.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.416 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:29.417 20:30:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:37.563 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:37.563 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:37.563 Found net devices under 0000:31:00.0: cvl_0_0 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:37.563 Found net devices under 0000:31:00.1: cvl_0_1 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.563 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.564 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.564 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:37.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:37.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:17:37.564 00:17:37.564 --- 10.0.0.2 ping statistics --- 00:17:37.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.564 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:17:37.564 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:37.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:17:37.564 00:17:37.564 --- 10.0.0.1 ping statistics --- 00:17:37.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.564 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:17:37.564 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.564 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:37.564 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:37.564 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.564 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:37.564 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:37.564 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.564 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:37.564 20:30:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:37.564 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:37.564 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:37.564 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:37.564 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.564 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3018815 00:17:37.564 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3018815 00:17:37.564 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:37.564 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3018815 ']' 00:17:37.564 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.564 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:37.564 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.564 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:37.564 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.564 [2024-05-13 20:30:53.065528] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:17:37.564 [2024-05-13 20:30:53.065575] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.564 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.564 [2024-05-13 20:30:53.139943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.564 [2024-05-13 20:30:53.205719] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.564 [2024-05-13 20:30:53.205759] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.564 [2024-05-13 20:30:53.205766] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.564 [2024-05-13 20:30:53.205773] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.564 [2024-05-13 20:30:53.205779] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.564 [2024-05-13 20:30:53.205919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.564 [2024-05-13 20:30:53.206048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.564 [2024-05-13 20:30:53.206066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.564 [2024-05-13 20:30:53.206079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:38.135 [2024-05-13 20:30:53.942220] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.135 20:30:53 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:38.135 Malloc0 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.135 20:30:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:38.135 [2024-05-13 20:30:54.013395] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:38.135 [2024-05-13 20:30:54.013643] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3018969 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3018972 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:38.135 { 00:17:38.135 "params": { 00:17:38.135 "name": "Nvme$subsystem", 00:17:38.135 "trtype": "$TEST_TRANSPORT", 00:17:38.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:38.135 "adrfam": "ipv4", 00:17:38.135 "trsvcid": "$NVMF_PORT", 00:17:38.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:38.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:38.135 "hdgst": ${hdgst:-false}, 00:17:38.135 "ddgst": ${ddgst:-false} 00:17:38.135 }, 00:17:38.135 "method": 
"bdev_nvme_attach_controller" 00:17:38.135 } 00:17:38.135 EOF 00:17:38.135 )") 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3018975 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3018978 00:17:38.135 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:38.135 { 00:17:38.135 "params": { 00:17:38.135 "name": "Nvme$subsystem", 00:17:38.135 "trtype": "$TEST_TRANSPORT", 00:17:38.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:38.135 "adrfam": "ipv4", 00:17:38.135 "trsvcid": "$NVMF_PORT", 00:17:38.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:38.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:38.136 "hdgst": ${hdgst:-false}, 00:17:38.136 "ddgst": ${ddgst:-false} 00:17:38.136 }, 00:17:38.136 "method": "bdev_nvme_attach_controller" 00:17:38.136 } 00:17:38.136 EOF 00:17:38.136 )") 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:38.136 { 00:17:38.136 "params": { 00:17:38.136 "name": "Nvme$subsystem", 00:17:38.136 "trtype": "$TEST_TRANSPORT", 00:17:38.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:38.136 "adrfam": "ipv4", 00:17:38.136 "trsvcid": "$NVMF_PORT", 00:17:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:38.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:38.136 "hdgst": ${hdgst:-false}, 00:17:38.136 "ddgst": ${ddgst:-false} 00:17:38.136 }, 00:17:38.136 "method": "bdev_nvme_attach_controller" 00:17:38.136 } 00:17:38.136 EOF 00:17:38.136 )") 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@554 -- # cat 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:38.136 { 00:17:38.136 "params": { 00:17:38.136 "name": "Nvme$subsystem", 00:17:38.136 "trtype": "$TEST_TRANSPORT", 00:17:38.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:38.136 "adrfam": "ipv4", 00:17:38.136 "trsvcid": "$NVMF_PORT", 00:17:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:38.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:38.136 "hdgst": ${hdgst:-false}, 00:17:38.136 "ddgst": ${ddgst:-false} 00:17:38.136 }, 00:17:38.136 "method": "bdev_nvme_attach_controller" 00:17:38.136 } 00:17:38.136 EOF 00:17:38.136 )") 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3018969 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:38.136 "params": { 00:17:38.136 "name": "Nvme1", 00:17:38.136 "trtype": "tcp", 00:17:38.136 "traddr": "10.0.0.2", 00:17:38.136 "adrfam": "ipv4", 00:17:38.136 "trsvcid": "4420", 00:17:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.136 "hdgst": false, 00:17:38.136 "ddgst": false 00:17:38.136 }, 00:17:38.136 "method": "bdev_nvme_attach_controller" 00:17:38.136 }' 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:38.136 "params": { 00:17:38.136 "name": "Nvme1", 00:17:38.136 "trtype": "tcp", 00:17:38.136 "traddr": "10.0.0.2", 00:17:38.136 "adrfam": "ipv4", 00:17:38.136 "trsvcid": "4420", 00:17:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.136 "hdgst": false, 00:17:38.136 "ddgst": false 00:17:38.136 }, 00:17:38.136 "method": "bdev_nvme_attach_controller" 00:17:38.136 }' 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:38.136 "params": { 00:17:38.136 "name": "Nvme1", 00:17:38.136 "trtype": "tcp", 00:17:38.136 "traddr": "10.0.0.2", 00:17:38.136 "adrfam": "ipv4", 00:17:38.136 "trsvcid": "4420", 00:17:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.136 "hdgst": false, 00:17:38.136 "ddgst": false 00:17:38.136 }, 00:17:38.136 "method": "bdev_nvme_attach_controller" 00:17:38.136 }' 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:38.136 20:30:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:38.136 "params": { 00:17:38.136 "name": "Nvme1", 00:17:38.136 "trtype": "tcp", 00:17:38.136 "traddr": "10.0.0.2", 00:17:38.136 "adrfam": "ipv4", 00:17:38.136 "trsvcid": "4420", 00:17:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.136 "hdgst": false, 00:17:38.136 "ddgst": false 00:17:38.136 }, 00:17:38.136 "method": "bdev_nvme_attach_controller" 00:17:38.136 }' 00:17:38.136 [2024-05-13 20:30:54.063240] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:17:38.136 [2024-05-13 20:30:54.063289] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:38.136 [2024-05-13 20:30:54.065647] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:17:38.136 [2024-05-13 20:30:54.065648] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:17:38.136 [2024-05-13 20:30:54.065698] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-13 20:30:54.065697] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:38.136 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:38.136 [2024-05-13 20:30:54.066626] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:17:38.136 [2024-05-13 20:30:54.066668] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:38.396 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.396 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.396 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.396 [2024-05-13 20:30:54.217074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.396 [2024-05-13 20:30:54.260921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.396 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.396 [2024-05-13 20:30:54.269192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:38.396 [2024-05-13 20:30:54.310326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.396 [2024-05-13 20:30:54.312524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:38.657 [2024-05-13 20:30:54.359264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.657 [2024-05-13 20:30:54.360620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:38.657 [2024-05-13 20:30:54.408832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:38.657 Running I/O for 1 seconds... 00:17:38.657 Running I/O for 1 seconds... 00:17:38.657 Running I/O for 1 seconds... 00:17:38.917 Running I/O for 1 seconds... 00:17:39.859 00:17:39.859 Latency(us) 00:17:39.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.859 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:39.859 Nvme1n1 : 1.01 11652.09 45.52 0.00 0.00 10923.12 4860.59 16711.68 00:17:39.859 =================================================================================================================== 00:17:39.859 Total : 11652.09 45.52 0.00 0.00 10923.12 4860.59 16711.68 00:17:39.859 00:17:39.859 Latency(us) 00:17:39.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.859 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:39.859 Nvme1n1 : 1.00 193684.98 756.58 0.00 0.00 658.21 264.53 744.11 00:17:39.859 =================================================================================================================== 00:17:39.859 Total : 193684.98 756.58 0.00 0.00 658.21 264.53 744.11 00:17:39.859 00:17:39.859 Latency(us) 00:17:39.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.859 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:39.859 Nvme1n1 : 1.00 10886.13 42.52 0.00 0.00 11727.79 4450.99 25777.49 00:17:39.859 =================================================================================================================== 00:17:39.859 Total : 10886.13 42.52 0.00 0.00 11727.79 4450.99 25777.49 00:17:39.859 00:17:39.859 Latency(us) 00:17:39.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.859 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:39.859 Nvme1n1 : 1.00 14963.44 58.45 0.00 0.00 8531.86 4369.07 18022.40 00:17:39.859 =================================================================================================================== 00:17:39.859 Total : 14963.44 58.45 0.00 0.00 8531.86 4369.07 18022.40 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 3018972 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3018975 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3018978 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:39.859 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:39.859 rmmod nvme_tcp 00:17:40.120 rmmod nvme_fabrics 00:17:40.120 rmmod nvme_keyring 00:17:40.120 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:40.120 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:40.120 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:40.120 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3018815 ']' 00:17:40.120 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3018815 00:17:40.120 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3018815 ']' 00:17:40.120 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3018815 00:17:40.120 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:40.120 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:40.121 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3018815 00:17:40.121 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:40.121 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:40.121 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3018815' 00:17:40.121 killing process with pid 3018815 00:17:40.121 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3018815 00:17:40.121 [2024-05-13 20:30:55.917733] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:40.121 20:30:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3018815 00:17:40.121 20:30:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:40.121 20:30:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:40.121 20:30:56 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:40.121 20:30:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.121 20:30:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:40.121 20:30:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.121 20:30:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.121 20:30:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.667 20:30:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:42.667 00:17:42.667 real 0m13.066s 00:17:42.667 user 0m18.943s 00:17:42.667 sys 0m7.071s 00:17:42.667 20:30:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:42.667 20:30:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:42.667 ************************************ 00:17:42.667 END TEST nvmf_bdev_io_wait 00:17:42.667 ************************************ 00:17:42.667 20:30:58 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:42.667 20:30:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:42.667 20:30:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:42.667 20:30:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:42.667 ************************************ 00:17:42.667 START TEST nvmf_queue_depth 00:17:42.667 ************************************ 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:42.667 * Looking for test storage... 
00:17:42.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:42.667 20:30:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:50.819 
20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:50.819 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:50.819 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:50.819 Found net devices under 0000:31:00.0: cvl_0_0 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:50.819 Found net devices under 0000:31:00.1: cvl_0_1 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:50.819 20:31:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.819 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.819 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.819 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:50.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:50.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:17:50.819 00:17:50.819 --- 10.0.0.2 ping statistics --- 00:17:50.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.820 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:17:50.820 00:17:50.820 --- 10.0.0.1 ping statistics --- 00:17:50.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.820 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3023907 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3023907 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3023907 ']' 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.820 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:50.820 [2024-05-13 20:31:06.190781] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:17:50.820 [2024-05-13 20:31:06.190846] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.820 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.820 [2024-05-13 20:31:06.285828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.820 [2024-05-13 20:31:06.379781] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.820 [2024-05-13 20:31:06.379845] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.820 [2024-05-13 20:31:06.379853] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.820 [2024-05-13 20:31:06.379860] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.820 [2024-05-13 20:31:06.379866] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.820 [2024-05-13 20:31:06.379892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.080 20:31:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:51.080 20:31:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:51.080 20:31:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:51.080 20:31:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.080 20:31:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.080 20:31:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.081 20:31:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.081 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.081 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.081 [2024-05-13 20:31:07.022787] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.341 Malloc0 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.341 20:31:07 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.341 [2024-05-13 20:31:07.077877] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:51.341 [2024-05-13 20:31:07.078186] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3024231 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3024231 /var/tmp/bdevperf.sock 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3024231 ']' 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:51.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.341 20:31:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:51.341 [2024-05-13 20:31:07.129798] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:17:51.341 [2024-05-13 20:31:07.129858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3024231 ] 00:17:51.341 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.341 [2024-05-13 20:31:07.200571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.341 [2024-05-13 20:31:07.273358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.283 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:52.283 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:52.283 20:31:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:52.283 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.283 20:31:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:52.283 NVMe0n1 00:17:52.283 20:31:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.283 20:31:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:52.283 Running I/O for 10 seconds... 00:18:02.286 00:18:02.286 Latency(us) 00:18:02.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.286 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:02.286 Verification LBA range: start 0x0 length 0x4000 00:18:02.286 NVMe0n1 : 10.05 11827.54 46.20 0.00 0.00 86296.65 7427.41 62914.56 00:18:02.286 =================================================================================================================== 00:18:02.286 Total : 11827.54 46.20 0.00 0.00 86296.65 7427.41 62914.56 00:18:02.286 0 00:18:02.286 20:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3024231 00:18:02.286 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3024231 ']' 00:18:02.286 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3024231 00:18:02.286 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:02.286 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:02.286 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3024231 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3024231' 00:18:02.548 killing process with pid 3024231 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3024231 00:18:02.548 Received shutdown signal, test time was about 10.000000 seconds 00:18:02.548 00:18:02.548 Latency(us) 00:18:02.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.548 =================================================================================================================== 00:18:02.548 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3024231 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:02.548 rmmod nvme_tcp 00:18:02.548 rmmod nvme_fabrics 00:18:02.548 rmmod nvme_keyring 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3023907 ']' 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3023907 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3023907 ']' 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3023907 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:02.548 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3023907 00:18:02.809 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:02.809 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:02.809 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3023907' 00:18:02.809 killing process with pid 3023907 00:18:02.809 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3023907 00:18:02.809 [2024-05-13 20:31:18.492993] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:02.809 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3023907 00:18:02.809 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:02.809 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:02.810 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:02.810 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:02.810 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:02.810 20:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.810 20:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.810 20:31:18 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.361 20:31:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:05.361 00:18:05.361 real 0m22.485s 00:18:05.361 user 0m25.581s 00:18:05.361 sys 0m6.923s 00:18:05.361 20:31:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:05.361 20:31:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:05.361 ************************************ 00:18:05.361 END TEST nvmf_queue_depth 00:18:05.361 ************************************ 00:18:05.361 20:31:20 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:05.361 20:31:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:05.361 20:31:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:05.361 20:31:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:05.361 ************************************ 00:18:05.361 START TEST nvmf_target_multipath 00:18:05.361 ************************************ 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:05.361 * Looking for test storage... 00:18:05.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:05.361 20:31:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:13.506 20:31:28 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:13.506 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:13.507 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:13.507 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.507 20:31:28 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:13.507 Found net devices under 0000:31:00.0: cvl_0_0 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:13.507 Found net devices under 0000:31:00.1: cvl_0_1 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:13.507 20:31:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:13.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:18:13.507 00:18:13.507 --- 10.0.0.2 ping statistics --- 00:18:13.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.507 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:13.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:13.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:18:13.507 00:18:13.507 --- 10.0.0.1 ping statistics --- 00:18:13.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.507 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:13.507 only one NIC for nvmf test 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:13.507 rmmod nvme_tcp 00:18:13.507 rmmod nvme_fabrics 00:18:13.507 rmmod nvme_keyring 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.507 20:31:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:15.421 00:18:15.421 real 0m10.473s 00:18:15.421 user 0m2.276s 00:18:15.421 sys 0m6.098s 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:15.421 20:31:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:15.421 ************************************ 00:18:15.421 END TEST nvmf_target_multipath 00:18:15.421 ************************************ 00:18:15.421 20:31:31 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:15.421 20:31:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:15.421 20:31:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:15.421 20:31:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:15.421 ************************************ 00:18:15.421 START TEST nvmf_zcopy 00:18:15.421 ************************************ 00:18:15.421 20:31:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:15.683 * Looking for test storage... 
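The same nvmf_tcp_init fixture is traced before nvmf_queue_depth, nvmf_target_multipath and, below, nvmf_zcopy: one port of the E810 NIC is moved into a private network namespace to act as the target, while the other port stays in the root namespace as the initiator. Condensed into a standalone sketch, using the interface names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 addresses this particular run enumerated, the sequence amounts to:

  # Target-side port goes into its own namespace; initiator-side port stays in the root namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator gets 10.0.0.1, the namespaced target port gets 10.0.0.2.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Accept NVMe/TCP traffic (TCP port 4420) on the initiator-side interface, then check reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why every nvmf_tgt launch in the trace is prefixed with "ip netns exec cvl_0_0_ns_spdk": the target listens on 10.0.0.2 inside the namespace, and bdevperf connects to it from the root namespace.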
00:18:15.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:15.683 20:31:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:15.684 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:15.684 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.684 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:15.684 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:15.684 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:15.684 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.684 20:31:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.684 20:31:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.684 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:15.684 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:15.684 20:31:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:15.684 20:31:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:23.854 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.854 
20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:23.854 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:23.854 Found net devices under 0000:31:00.0: cvl_0_0 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:23.854 Found net devices under 0000:31:00.1: cvl_0_1 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.854 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:23.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:18:23.855 00:18:23.855 --- 10.0.0.2 ping statistics --- 00:18:23.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.855 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:23.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:23.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:18:23.855 00:18:23.855 --- 10.0.0.1 ping statistics --- 00:18:23.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.855 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3035589 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3035589 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3035589 ']' 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:23.855 20:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:23.855 [2024-05-13 20:31:38.995804] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:18:23.855 [2024-05-13 20:31:38.995867] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.855 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.855 [2024-05-13 20:31:39.094404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.855 [2024-05-13 20:31:39.187213] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.855 [2024-05-13 20:31:39.187276] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:23.855 [2024-05-13 20:31:39.187285] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.855 [2024-05-13 20:31:39.187292] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.855 [2024-05-13 20:31:39.187298] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.855 [2024-05-13 20:31:39.187333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.855 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:23.855 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:18:23.855 20:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:23.855 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:23.855 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:24.117 [2024-05-13 20:31:39.826449] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:24.117 [2024-05-13 20:31:39.846423] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:24.117 [2024-05-13 20:31:39.846776] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:24.117 malloc0 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:24.117 { 00:18:24.117 "params": { 00:18:24.117 "name": "Nvme$subsystem", 00:18:24.117 "trtype": "$TEST_TRANSPORT", 00:18:24.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:24.117 "adrfam": "ipv4", 00:18:24.117 "trsvcid": "$NVMF_PORT", 00:18:24.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:24.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:24.117 "hdgst": ${hdgst:-false}, 00:18:24.117 "ddgst": ${ddgst:-false} 00:18:24.117 }, 00:18:24.117 "method": "bdev_nvme_attach_controller" 00:18:24.117 } 00:18:24.117 EOF 00:18:24.117 )") 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:24.117 20:31:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:24.117 "params": { 00:18:24.117 "name": "Nvme1", 00:18:24.117 "trtype": "tcp", 00:18:24.117 "traddr": "10.0.0.2", 00:18:24.117 "adrfam": "ipv4", 00:18:24.118 "trsvcid": "4420", 00:18:24.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.118 "hdgst": false, 00:18:24.118 "ddgst": false 00:18:24.118 }, 00:18:24.118 "method": "bdev_nvme_attach_controller" 00:18:24.118 }' 00:18:24.118 [2024-05-13 20:31:39.929190] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:18:24.118 [2024-05-13 20:31:39.929253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035936 ] 00:18:24.118 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.118 [2024-05-13 20:31:40.003426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.378 [2024-05-13 20:31:40.083426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.378 Running I/O for 10 seconds... 
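Note on the trace so far: at this point the target side of the zcopy run is fully stood up. nvmf_tgt is running inside the cvl_0_0_ns_spdk namespace with -m 0x2, a TCP transport is created with zero-copy enabled (-t tcp -o -c 0 --zcopy), subsystem nqn.2016-06.io.spdk:cnode1 gets a listener on 10.0.0.2:4420 plus a 32 MB / 4096-byte-block malloc bdev attached as namespace 1, and bdevperf is launched against it with the generated bdev_nvme_attach_controller JSON (-t 10 -q 128 -w verify -o 8192). A minimal stand-alone sketch of that target-side sequence, assuming SPDK's stock scripts/rpc.py client on the default /var/tmp/spdk.sock socket (the test itself issues the identical RPCs through its rpc_cmd helper):

    # Sketch, not the test script: reproduce the target-side setup traced above.
    # Assumes cvl_0_0 is already inside the cvl_0_0_ns_spdk namespace with
    # 10.0.0.2/24, as configured earlier in this log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    sleep 3  # crude stand-in for the test's waitforlisten helper
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -c 0 --zcopy
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    "$SPDK/scripts/rpc.py" bdev_malloc_create 32 4096 -b malloc0
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Initiator side (default namespace): bdevperf then connects to 10.0.0.2:4420
    # over cvl_0_1 (10.0.0.1) using the JSON shown in the trace.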
00:18:34.388 00:18:34.388 Latency(us) 00:18:34.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.388 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:34.388 Verification LBA range: start 0x0 length 0x1000 00:18:34.388 Nvme1n1 : 10.01 9336.56 72.94 0.00 0.00 13657.97 1631.57 28835.84 00:18:34.388 =================================================================================================================== 00:18:34.388 Total : 9336.56 72.94 0.00 0.00 13657.97 1631.57 28835.84 00:18:34.648 20:31:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3037938 00:18:34.648 20:31:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:34.648 20:31:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:34.648 20:31:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:34.648 20:31:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:34.648 20:31:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:34.648 20:31:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:34.648 20:31:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:34.648 20:31:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:34.648 { 00:18:34.648 "params": { 00:18:34.648 "name": "Nvme$subsystem", 00:18:34.648 "trtype": "$TEST_TRANSPORT", 00:18:34.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:34.648 "adrfam": "ipv4", 00:18:34.648 "trsvcid": "$NVMF_PORT", 00:18:34.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:34.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:34.648 "hdgst": ${hdgst:-false}, 00:18:34.648 "ddgst": ${ddgst:-false} 00:18:34.648 }, 00:18:34.648 "method": "bdev_nvme_attach_controller" 00:18:34.648 } 00:18:34.648 EOF 00:18:34.648 )") 00:18:34.648 20:31:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:34.648 [2024-05-13 20:31:50.396527] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.396558] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 20:31:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:18:34.648 20:31:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:34.648 [2024-05-13 20:31:50.404506] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.404515] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 20:31:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:34.648 "params": { 00:18:34.648 "name": "Nvme1", 00:18:34.648 "trtype": "tcp", 00:18:34.648 "traddr": "10.0.0.2", 00:18:34.648 "adrfam": "ipv4", 00:18:34.648 "trsvcid": "4420", 00:18:34.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.648 "hdgst": false, 00:18:34.648 "ddgst": false 00:18:34.648 }, 00:18:34.648 "method": "bdev_nvme_attach_controller" 00:18:34.648 }' 00:18:34.648 [2024-05-13 20:31:50.412523] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.412531] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.417181] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:18:34.648 [2024-05-13 20:31:50.417229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3037938 ] 00:18:34.648 [2024-05-13 20:31:50.420542] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.420550] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.428563] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.428570] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.436584] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.436592] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.444607] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.444614] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.648 [2024-05-13 20:31:50.452627] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.452636] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.460648] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.460655] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.468668] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.468675] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.476688] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.476696] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.481297] app.c: 909:spdk_app_start: 
*NOTICE*: Total cores available: 1 00:18:34.648 [2024-05-13 20:31:50.484710] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.484718] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.492730] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.492738] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.500750] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.500759] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.508770] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.508779] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.516792] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.516802] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.524811] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.524821] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.532831] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.532840] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.540850] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.540858] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.545347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.648 [2024-05-13 20:31:50.548869] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.548877] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.556891] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.556901] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.564917] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.564930] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.572933] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.572941] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.580956] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.580964] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.648 [2024-05-13 20:31:50.588974] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.648 [2024-05-13 20:31:50.588985] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:18:34.909 [2024-05-13 20:31:50.596996] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.597004] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.605015] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.605022] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.613035] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.613042] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.625086] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.625101] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.633099] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.633110] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.641111] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.641120] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.649133] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.649143] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.657154] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.657164] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.665175] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.665182] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.673195] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.673202] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.681215] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.681222] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.689236] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.689244] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.697258] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.697265] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.705281] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.705290] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.713298] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 
[2024-05-13 20:31:50.713305] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.721322] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.721330] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.729342] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.729350] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.737365] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.737373] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.745378] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.745390] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.753401] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.753410] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.761421] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.761429] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.769442] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.769449] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.777463] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.777470] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.785483] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.785490] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.793505] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.793513] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.801534] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.801548] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.809548] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.809556] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 Running I/O for 5 seconds... 
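Note on the repeated errors above and below: the paired messages ("Requested NSID 1 already in use" from spdk_nvmf_subsystem_add_ns_ext, followed by "Unable to add namespace" from nvmf_rpc_ns_paused) are emitted while the second bdevperf instance (perfpid 3037938, -t 5 -q 128 -w randrw -M 50 -o 8192) is driving I/O: the test keeps re-issuing the nvmf_subsystem_add_ns RPC for NSID 1, which is already occupied by malloc0, so every attempt is rejected. The callback name nvmf_rpc_ns_paused suggests each attempt still pauses and resumes the subsystem before failing, so the pause/resume path is being exercised under live zero-copy traffic, and the run continues through hundreds of such attempts without aborting. A rough illustration of that pattern (not the test's literal loop):

    # Illustration only: keep re-issuing the namespace add seen in the trace while
    # bdevperf runs. Each call is expected to fail with "Requested NSID 1 already
    # in use" because malloc0 is already attached as NSID 1.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for _ in $(seq 1 20); do
        "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done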
00:18:34.909 [2024-05-13 20:31:50.817569] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.817579] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.828538] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.828556] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.838057] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.838075] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.909 [2024-05-13 20:31:50.846860] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.909 [2024-05-13 20:31:50.846877] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.855747] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.855765] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.864796] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.864813] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.873555] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.873573] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.882863] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.882881] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.891805] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.891821] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.900945] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.900961] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.909952] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.909973] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.918927] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.918943] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.927108] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.927124] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.935606] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.935624] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.944207] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 
[2024-05-13 20:31:50.944223] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.952795] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.952811] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.961197] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.961214] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.970355] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.970371] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.979386] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.979404] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.988669] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.988685] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:50.997245] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:50.997261] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:51.006149] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:51.006165] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:51.014659] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:51.014675] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:51.023699] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:51.023715] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:51.032297] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:51.032317] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:51.041379] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:51.041394] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:51.050070] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:51.050086] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:51.059606] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:51.059622] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:51.068920] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:51.068937] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:51.077163] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:51.077179] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:51.086363] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:51.086378] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:51.094551] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:51.094567] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:51.103370] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:51.103386] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.171 [2024-05-13 20:31:51.112233] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.171 [2024-05-13 20:31:51.112250] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.433 [2024-05-13 20:31:51.121572] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.433 [2024-05-13 20:31:51.121588] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.433 [2024-05-13 20:31:51.130923] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.433 [2024-05-13 20:31:51.130939] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.433 [2024-05-13 20:31:51.139818] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.433 [2024-05-13 20:31:51.139834] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.433 [2024-05-13 20:31:51.148519] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.433 [2024-05-13 20:31:51.148535] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.433 [2024-05-13 20:31:51.157795] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.433 [2024-05-13 20:31:51.157811] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.165827] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.165842] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.174667] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.174683] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.183442] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.183458] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.191453] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.191468] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.200601] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.200617] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.208668] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.208684] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.217411] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.217428] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.226174] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.226191] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.235200] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.235217] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.244528] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.244545] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.252547] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.252564] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.261367] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.261384] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.270667] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.270684] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.279220] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.279237] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.288849] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.288865] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.297206] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.297223] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.305735] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.305752] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.314817] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.314834] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.323278] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.323294] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.332427] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.332444] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.341836] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.341852] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.350991] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.351008] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.359805] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.359821] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.434 [2024-05-13 20:31:51.368939] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.434 [2024-05-13 20:31:51.368957] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.377752] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.377769] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.387099] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.387116] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.395384] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.395400] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.403957] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.403973] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.412588] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.412604] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.421798] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.421815] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.430341] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.430357] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.439226] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.439242] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.447408] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.447424] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.456005] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.456021] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.465160] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.465176] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.473928] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.473946] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.482535] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.482552] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.491650] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.491667] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.500402] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.500418] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.509258] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.509275] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.519031] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.519048] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.527363] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.527379] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.536085] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.536101] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.545239] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.545255] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.553743] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.553760] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.562510] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.562527] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.571104] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.571122] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.580007] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.580024] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.588510] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.588527] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.596975] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.596991] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.605382] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.605398] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.613602] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.613617] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.623272] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.623288] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.696 [2024-05-13 20:31:51.631682] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.696 [2024-05-13 20:31:51.631698] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.639774] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.639793] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.648556] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.648573] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.657612] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.657629] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.665966] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.665982] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.675118] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.675134] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.683485] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.683501] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.692308] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.692332] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.701790] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.701807] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.710842] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.710859] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.719477] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.719493] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.728184] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.728200] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.736766] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.736785] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.745883] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.745900] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.755306] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.755328] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.763486] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.763502] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.772755] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.772772] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.780961] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.780977] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.790172] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.790188] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.799126] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.799144] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.807755] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.807772] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.816489] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.816506] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.825187] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.825204] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.834563] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.959 [2024-05-13 20:31:51.834579] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.959 [2024-05-13 20:31:51.843747] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:35.959 [2024-05-13 20:31:51.843764] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:35.959 [2024-05-13 20:31:51.852594] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:35.959 [2024-05-13 20:31:51.852611] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:35.959 [2024-05-13 20:31:51.860885] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:35.959 [2024-05-13 20:31:51.860902] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:1981 "Requested NSID 1 already in use" / nvmf_rpc.c:1531 "Unable to add namespace" error pair repeats for every subsequent add-namespace attempt, roughly every 9 ms, from 2024-05-13 20:31:51.869 through 20:31:54.519 (elapsed 00:18:35.959 - 00:18:38.589) ...]
00:18:38.589 [2024-05-13 20:31:54.527950] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:38.589 [2024-05-13 20:31:54.527967] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:38.851 [2024-05-13 20:31:54.537033] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:38.851 [2024-05-13 20:31:54.537049]
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.851 [2024-05-13 20:31:54.545835] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.851 [2024-05-13 20:31:54.545851] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.851 [2024-05-13 20:31:54.554144] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.851 [2024-05-13 20:31:54.554160] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.851 [2024-05-13 20:31:54.562517] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.851 [2024-05-13 20:31:54.562532] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.851 [2024-05-13 20:31:54.570681] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.851 [2024-05-13 20:31:54.570697] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.851 [2024-05-13 20:31:54.579555] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.851 [2024-05-13 20:31:54.579575] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.851 [2024-05-13 20:31:54.588686] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.851 [2024-05-13 20:31:54.588702] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.851 [2024-05-13 20:31:54.598255] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.851 [2024-05-13 20:31:54.598271] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.851 [2024-05-13 20:31:54.606931] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.851 [2024-05-13 20:31:54.606950] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.851 [2024-05-13 20:31:54.615925] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.851 [2024-05-13 20:31:54.615940] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.851 [2024-05-13 20:31:54.623999] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.851 [2024-05-13 20:31:54.624018] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.851 [2024-05-13 20:31:54.632740] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.851 [2024-05-13 20:31:54.632758] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.851 [2024-05-13 20:31:54.640846] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.851 [2024-05-13 20:31:54.640866] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.851 [2024-05-13 20:31:54.649457] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.851 [2024-05-13 20:31:54.649474] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.851 [2024-05-13 20:31:54.657827] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.851 [2024-05-13 20:31:54.657843] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.666566] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.666582] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.675732] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.675749] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.683823] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.683842] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.692930] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.692946] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.701886] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.701902] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.710997] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.711013] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.719773] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.719791] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.728392] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.728408] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.737285] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.737302] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.746106] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.746126] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.755139] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.755155] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.764645] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.764661] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.773659] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.773675] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.782321] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.782338] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.852 [2024-05-13 20:31:54.791582] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.852 [2024-05-13 20:31:54.791599] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.799722] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.799741] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.808638] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.808655] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.817861] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.817878] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.825917] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.825934] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.834642] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.834659] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.843543] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.843560] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.852851] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.852868] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.862151] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.862168] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.871320] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.871336] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.880475] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.880492] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.889467] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.889483] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.898733] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.898750] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.908018] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.908034] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.916838] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.916858] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.926272] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.926289] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.934464] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.934484] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.943135] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.943151] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.952396] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.952413] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.961037] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.961054] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.969675] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.969695] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.977891] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.977907] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.113 [2024-05-13 20:31:54.986901] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.113 [2024-05-13 20:31:54.986917] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.114 [2024-05-13 20:31:54.995683] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.114 [2024-05-13 20:31:54.995699] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.114 [2024-05-13 20:31:55.004728] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.114 [2024-05-13 20:31:55.004744] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.114 [2024-05-13 20:31:55.013364] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.114 [2024-05-13 20:31:55.013381] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.114 [2024-05-13 20:31:55.021984] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.114 [2024-05-13 20:31:55.022000] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.114 [2024-05-13 20:31:55.030735] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.114 [2024-05-13 20:31:55.030751] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.114 [2024-05-13 20:31:55.039499] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.114 [2024-05-13 20:31:55.039517] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.114 [2024-05-13 20:31:55.048529] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.114 [2024-05-13 20:31:55.048546] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.057133] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.057150] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.065966] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.065982] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.075197] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.075213] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.083914] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.083930] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.092637] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.092653] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.102058] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.102074] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.110374] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.110392] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.119555] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.119572] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.128767] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.128784] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.136931] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.136947] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.145660] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.145677] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.154154] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.154171] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.162814] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.162831] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.172027] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.172044] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.180722] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.180738] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.189598] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.189614] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.198201] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.198217] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.206979] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.206995] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.216328] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.216344] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.225298] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.225320] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.234607] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.234624] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.243503] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.243519] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.252212] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.252228] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.260837] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.260853] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.269694] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.269710] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.278626] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.278643] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.287746] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.287762] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.297213] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.297229] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.305523] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.305540] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 [2024-05-13 20:31:55.314403] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.375 [2024-05-13 20:31:55.314419] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.322762] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.322778] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.331835] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.331851] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.340787] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.340803] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.349850] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.349867] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.359163] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.359180] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.367990] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.368006] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.376824] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.376841] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.385956] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.385972] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.394168] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.394183] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.403267] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.403283] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.411522] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.411538] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.420288] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.420304] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.429447] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.429463] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.437483] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.437499] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.446713] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.446729] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.455476] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.455496] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.464867] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.464883] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.474317] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.474333] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.482216] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.482233] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.491789] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.491805] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.499927] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.499943] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.509004] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.509020] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.517676] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.517691] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.526815] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.526830] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.536123] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.536139] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.545023] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.545039] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.553643] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.553659] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.562660] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.562677] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.637 [2024-05-13 20:31:55.571926] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.637 [2024-05-13 20:31:55.571942] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.580979] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.580995] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.589712] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.589728] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.598463] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.598478] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.607697] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.607713] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.616474] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.616489] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.625178] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.625194] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.633743] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.633759] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.642266] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.642282] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.651502] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.651518] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.659838] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.659852] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.668888] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.668904] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.677612] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.677627] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.686942] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.686958] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.695626] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.695642] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.704213] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.704228] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.713388] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.713404] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.722164] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.722180] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.731141] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.731157] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.740326] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.740341] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.749485] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.749505] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.758204] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.758220] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.766651] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.766667] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.775340] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.775356] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.783506] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.783521] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.792188] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.792204] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.800722] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.800737] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.809684] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.809700] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.817863] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.817884] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.826580] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.826596] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 00:18:39.899 Latency(us) 00:18:39.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.899 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:39.899 Nvme1n1 : 5.01 18150.15 141.80 0.00 0.00 7044.25 3167.57 16602.45 00:18:39.899 =================================================================================================================== 00:18:39.899 Total : 18150.15 141.80 0.00 0.00 7044.25 3167.57 16602.45 00:18:39.899 [2024-05-13 20:31:55.832509] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.832523] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.899 [2024-05-13 20:31:55.840552] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.899 [2024-05-13 20:31:55.840565] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.848572] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.848583] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.856596] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.856606] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.864616] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.864625] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.872635] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.872645] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.880654] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.880668] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.888673] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.888682] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.896693] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.896701] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.904714] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.904723] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.912735] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.912742] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.920757] 
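For orientation, the summary above hangs together numerically: with the 8192-byte I/O size and queue depth 128 printed in the job line, the reported IOPS, MiB/s and average latency agree with one another. A quick, purely illustrative check (not part of zcopy.sh or bdevperf):

    # Back-of-the-envelope check of the summary table above (illustrative only).
    # MiB/s = IOPS * io_size / 2^20; Little's law: IOPS ~= queue_depth / avg_latency.
    awk 'BEGIN {
      iops = 18150.15; io_size = 8192; qd = 128; avg_us = 7044.25
      printf "MiB/s        : %.2f\n", iops * io_size / 1048576   # ~141.80, matches the table
      printf "IOPS from QD : %.0f\n", qd / (avg_us / 1e6)        # ~18170, close to the reported 18150.15
    }'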
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.920766] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.928777] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.928785] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.936798] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.936806] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.944820] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.944828] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.952839] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.952847] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 [2024-05-13 20:31:55.960859] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.162 [2024-05-13 20:31:55.960866] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3037938) - No such process 00:18:40.162 20:31:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3037938 00:18:40.162 20:31:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:40.162 20:31:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.162 20:31:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:40.162 20:31:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.162 20:31:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:40.162 20:31:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.162 20:31:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:40.162 delay0 00:18:40.162 20:31:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.162 20:31:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:40.162 20:31:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.162 20:31:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:40.162 20:31:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.162 20:31:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:40.162 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.162 [2024-05-13 20:31:56.071558] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:46.749 Initializing NVMe Controllers 00:18:46.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:46.749 
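The long run of "Requested NSID 1 already in use" / "Unable to add namespace" errors above is zcopy.sh repeatedly re-adding NSID 1 to cnode1 while that namespace still exists; once the loop ends, the test removes the namespace, rebuilds it on top of a delay bdev, and drives slow I/O through the abort example so the aborts have something to cancel. A rough standalone equivalent of the rpc_cmd sequence in the trace (rpc_cmd is the harness wrapper around scripts/rpc.py; names, values and paths are taken from the log, everything else is an illustrative sketch):

    # Sketch only: mirrors the RPC sequence in the trace above.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Drop the conflicting namespace, then stack a delay bdev (1,000,000 us of
    # added latency per op) on malloc0 and expose it again as NSID 1.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # Queue random read/write I/O against the slow namespace and abort it,
    # exactly as the log's invocation of build/examples/abort does.
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'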
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:46.749 Initialization complete. Launching workers. 00:18:46.749 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 217 00:18:46.749 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 500, failed to submit 37 00:18:46.749 success 346, unsuccess 154, failed 0 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:46.749 rmmod nvme_tcp 00:18:46.749 rmmod nvme_fabrics 00:18:46.749 rmmod nvme_keyring 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3035589 ']' 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3035589 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3035589 ']' 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3035589 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3035589 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3035589' 00:18:46.749 killing process with pid 3035589 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3035589 00:18:46.749 [2024-05-13 20:32:02.285621] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3035589 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:46.749 20:32:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.665 20:32:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:48.665 00:18:48.665 real 0m33.153s 00:18:48.665 user 0m44.002s 00:18:48.665 sys 0m9.971s 00:18:48.665 20:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:48.665 20:32:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:48.665 ************************************ 00:18:48.665 END TEST nvmf_zcopy 00:18:48.665 ************************************ 00:18:48.665 20:32:04 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:48.665 20:32:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:48.665 20:32:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:48.665 20:32:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:48.665 ************************************ 00:18:48.665 START TEST nvmf_nmic 00:18:48.665 ************************************ 00:18:48.665 20:32:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:48.928 * Looking for test storage... 00:18:48.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.928 20:32:04 
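nmic.sh begins the same way the other target tests do: it sources nvmf/common.sh, which generates a fresh host identity with nvme gen-hostnqn and keeps the NQN/UUID pair in NVME_HOST for later connect calls. Purely as an illustration of how that identity is consumed (the port and subsystem NQN are the values common.sh sets above; the 10.0.0.2 address is the target address used earlier in this log, and the connect call itself is not output from nmic.sh):

    # Illustrative only: how a gen-hostnqn identity is typically handed to nvme-cli.
    HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
    HOSTID=${HOSTNQN##*:}              # bare UUID, same shape as NVME_HOSTID above

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"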
nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:48.928 
20:32:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:48.928 20:32:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.076 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.077 
20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:57.077 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:57.077 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:57.077 Found net devices under 0000:31:00.0: cvl_0_0 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:57.077 Found net devices under 0000:31:00.1: cvl_0_1 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:57.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:57.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:18:57.077 00:18:57.077 --- 10.0.0.2 ping statistics --- 00:18:57.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.077 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:57.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:18:57.077 00:18:57.077 --- 10.0.0.1 ping statistics --- 00:18:57.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.077 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3044955 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3044955 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3044955 ']' 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:57.077 20:32:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.077 [2024-05-13 20:32:13.018071] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:18:57.077 [2024-05-13 20:32:13.018119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.339 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.339 [2024-05-13 20:32:13.091656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:57.339 [2024-05-13 20:32:13.159201] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.339 [2024-05-13 20:32:13.159241] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.339 [2024-05-13 20:32:13.159248] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.339 [2024-05-13 20:32:13.159255] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.339 [2024-05-13 20:32:13.159261] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.339 [2024-05-13 20:32:13.159430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.339 [2024-05-13 20:32:13.159544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.339 [2024-05-13 20:32:13.159666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:57.339 [2024-05-13 20:32:13.159668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.910 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:57.910 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:57.910 20:32:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:57.910 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.910 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.910 20:32:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.910 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:57.910 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.910 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:57.910 [2024-05-13 20:32:13.836916] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.910 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.910 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:57.910 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.910 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:58.170 Malloc0 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:58.170 [2024-05-13 20:32:13.896099] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:58.170 [2024-05-13 20:32:13.896376] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:58.170 test case1: single bdev can't be used in multiple subsystems 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:58.170 [2024-05-13 20:32:13.932241] bdev.c:8011:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:58.170 [2024-05-13 20:32:13.932258] subsystem.c:2015:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:58.170 [2024-05-13 20:32:13.932266] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.170 request: 00:18:58.170 { 00:18:58.170 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:58.170 "namespace": { 00:18:58.170 "bdev_name": "Malloc0", 00:18:58.170 "no_auto_visible": false 00:18:58.170 }, 00:18:58.170 "method": "nvmf_subsystem_add_ns", 00:18:58.170 "req_id": 1 00:18:58.170 } 00:18:58.170 Got JSON-RPC error response 00:18:58.170 response: 00:18:58.170 { 00:18:58.170 "code": -32602, 00:18:58.170 "message": "Invalid parameters" 00:18:58.170 } 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:58.170 20:32:13 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:58.170 Adding namespace failed - expected result. 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:58.170 test case2: host connect to nvmf target in multiple paths 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:58.170 [2024-05-13 20:32:13.944385] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.170 20:32:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:00.083 20:32:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:01.470 20:32:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:01.470 20:32:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:19:01.470 20:32:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:01.470 20:32:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:19:01.470 20:32:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:19:03.438 20:32:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:03.438 20:32:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:03.438 20:32:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:03.438 20:32:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:19:03.438 20:32:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:03.438 20:32:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:19:03.438 20:32:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:03.438 [global] 00:19:03.438 thread=1 00:19:03.438 invalidate=1 00:19:03.438 rw=write 00:19:03.438 time_based=1 00:19:03.438 runtime=1 00:19:03.438 ioengine=libaio 00:19:03.438 direct=1 00:19:03.438 bs=4096 00:19:03.438 iodepth=1 00:19:03.438 norandommap=0 00:19:03.438 numjobs=1 00:19:03.438 00:19:03.438 verify_dump=1 00:19:03.438 verify_backlog=512 00:19:03.438 verify_state_save=0 00:19:03.438 do_verify=1 00:19:03.438 verify=crc32c-intel 00:19:03.438 [job0] 00:19:03.438 filename=/dev/nvme0n1 00:19:03.438 Could not set queue depth (nvme0n1) 00:19:03.725 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:19:03.725 fio-3.35 00:19:03.725 Starting 1 thread 00:19:05.114 00:19:05.114 job0: (groupid=0, jobs=1): err= 0: pid=3046327: Mon May 13 20:32:20 2024 00:19:05.114 read: IOPS=554, BW=2218KiB/s (2271kB/s)(2220KiB/1001msec) 00:19:05.114 slat (nsec): min=6260, max=62095, avg=25231.54, stdev=6843.24 00:19:05.114 clat (usec): min=425, max=1612, avg=801.48, stdev=120.76 00:19:05.114 lat (usec): min=441, max=1638, avg=826.71, stdev=122.21 00:19:05.114 clat percentiles (usec): 00:19:05.114 | 1.00th=[ 498], 5.00th=[ 594], 10.00th=[ 635], 20.00th=[ 693], 00:19:05.114 | 30.00th=[ 742], 40.00th=[ 783], 50.00th=[ 824], 60.00th=[ 857], 00:19:05.114 | 70.00th=[ 881], 80.00th=[ 898], 90.00th=[ 930], 95.00th=[ 963], 00:19:05.114 | 99.00th=[ 1004], 99.50th=[ 1045], 99.90th=[ 1614], 99.95th=[ 1614], 00:19:05.114 | 99.99th=[ 1614] 00:19:05.114 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:05.114 slat (nsec): min=8602, max=68523, avg=30095.54, stdev=9964.60 00:19:05.114 clat (usec): min=137, max=1212, avg=487.52, stdev=111.00 00:19:05.114 lat (usec): min=160, max=1251, avg=517.62, stdev=115.30 00:19:05.114 clat percentiles (usec): 00:19:05.114 | 1.00th=[ 231], 5.00th=[ 306], 10.00th=[ 351], 20.00th=[ 396], 00:19:05.114 | 30.00th=[ 429], 40.00th=[ 474], 50.00th=[ 494], 60.00th=[ 515], 00:19:05.114 | 70.00th=[ 537], 80.00th=[ 578], 90.00th=[ 619], 95.00th=[ 652], 00:19:05.114 | 99.00th=[ 693], 99.50th=[ 725], 99.90th=[ 1188], 99.95th=[ 1221], 00:19:05.114 | 99.99th=[ 1221] 00:19:05.114 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:05.114 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:05.114 lat (usec) : 250=1.08%, 500=34.20%, 750=40.41%, 1000=23.62% 00:19:05.114 lat (msec) : 2=0.70% 00:19:05.114 cpu : usr=3.30%, sys=5.80%, ctx=1579, majf=0, minf=1 00:19:05.114 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.114 issued rwts: total=555,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.114 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.114 00:19:05.114 Run status group 0 (all jobs): 00:19:05.114 READ: bw=2218KiB/s (2271kB/s), 2218KiB/s-2218KiB/s (2271kB/s-2271kB/s), io=2220KiB (2273kB), run=1001-1001msec 00:19:05.114 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:19:05.114 00:19:05.114 Disk stats (read/write): 00:19:05.114 nvme0n1: ios=562/898, merge=0/0, ticks=724/339, in_queue=1063, util=96.69% 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:05.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:05.114 
20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:05.114 rmmod nvme_tcp 00:19:05.114 rmmod nvme_fabrics 00:19:05.114 rmmod nvme_keyring 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:05.114 20:32:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:05.115 20:32:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3044955 ']' 00:19:05.115 20:32:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3044955 00:19:05.115 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3044955 ']' 00:19:05.115 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3044955 00:19:05.115 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:19:05.115 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:05.115 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3044955 00:19:05.115 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:05.115 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:05.115 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3044955' 00:19:05.115 killing process with pid 3044955 00:19:05.115 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3044955 00:19:05.115 [2024-05-13 20:32:20.910244] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:05.115 20:32:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3044955 00:19:05.376 20:32:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:05.376 20:32:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:05.376 20:32:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:05.376 20:32:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:05.376 20:32:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:05.376 20:32:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.376 20:32:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.376 20:32:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.288 20:32:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:07.288 00:19:07.288 real 0m18.593s 00:19:07.288 user 0m47.989s 00:19:07.288 sys 0m7.169s 00:19:07.288 20:32:23 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:19:07.288 20:32:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:07.288 ************************************ 00:19:07.288 END TEST nvmf_nmic 00:19:07.288 ************************************ 00:19:07.288 20:32:23 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:07.288 20:32:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:07.288 20:32:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:07.288 20:32:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:07.288 ************************************ 00:19:07.288 START TEST nvmf_fio_target 00:19:07.288 ************************************ 00:19:07.288 20:32:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:07.549 * Looking for test storage... 00:19:07.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
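For reference, a minimal sketch of how the nvmf/common.sh preamble traced above derives the initiator identity that the later nvme connect calls reuse. This is a hedged reconstruction, not the script verbatim; it assumes nvme-cli is installed, and the NQN/UUID values echoed in this log are host-specific.
# sketch: derive the host NQN/ID once, reuse them for every connect (assumption: nvme-cli present)
NVME_HOSTNQN=$(nvme gen-hostnqn)                 # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}                  # keep only the trailing uuid portion
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# later, as seen in the trace:
# nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420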
00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:07.549 20:32:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:15.696 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:15.696 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.696 
20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:15.696 Found net devices under 0000:31:00.0: cvl_0_0 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:15.696 Found net devices under 0000:31:00.1: cvl_0_1 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:15.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:15.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.726 ms 00:19:15.696 00:19:15.696 --- 10.0.0.2 ping statistics --- 00:19:15.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.696 rtt min/avg/max/mdev = 0.726/0.726/0.726/0.000 ms 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:15.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:15.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:19:15.696 00:19:15.696 --- 10.0.0.1 ping statistics --- 00:19:15.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.696 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:15.696 20:32:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.697 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3051241 00:19:15.697 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3051241 00:19:15.697 20:32:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:15.697 20:32:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3051241 ']' 00:19:15.697 20:32:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.697 20:32:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:15.697 20:32:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
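The nvmf_tcp_init trace above boils down to a two-port, namespace-split topology: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, NVMe/TCP port 4420 is opened, and both directions are ping-checked before nvmf_tgt is started inside the namespace. A condensed sketch follows; interface names, addresses, and the binary path are the ones from this run, and every command needs root.
# sketch of the test topology set up by nvmf_tcp_init (values as in this run)
ip netns add cvl_0_0_ns_spdk                          # target-side network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                    # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator sanity check
# the target application then runs inside the namespace, as traced above:
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF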
00:19:15.697 20:32:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:15.697 20:32:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.958 [2024-05-13 20:32:31.673432] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:19:15.958 [2024-05-13 20:32:31.673495] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.958 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.958 [2024-05-13 20:32:31.753766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:15.958 [2024-05-13 20:32:31.827017] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.958 [2024-05-13 20:32:31.827061] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.958 [2024-05-13 20:32:31.827069] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.958 [2024-05-13 20:32:31.827076] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.958 [2024-05-13 20:32:31.827081] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.958 [2024-05-13 20:32:31.827218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.958 [2024-05-13 20:32:31.827335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.958 [2024-05-13 20:32:31.827448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:15.958 [2024-05-13 20:32:31.827451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.531 20:32:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:16.531 20:32:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:19:16.531 20:32:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:16.531 20:32:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.531 20:32:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.792 20:32:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.792 20:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:16.792 [2024-05-13 20:32:32.638393] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.792 20:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.054 20:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:17.054 20:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.315 20:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:17.315 20:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.315 20:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:17.315 20:32:33 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.576 20:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:17.576 20:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:17.837 20:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.837 20:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:17.837 20:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:18.097 20:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:18.097 20:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:18.357 20:32:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:18.357 20:32:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:18.357 20:32:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:18.618 20:32:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:18.618 20:32:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:18.878 20:32:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:18.878 20:32:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:18.878 20:32:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:19.138 [2024-05-13 20:32:34.915957] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:19.138 [2024-05-13 20:32:34.916245] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.138 20:32:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:19.396 20:32:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:19.396 20:32:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:21.304 20:32:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:19:21.304 20:32:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:19:21.304 20:32:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:21.304 20:32:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:19:21.304 20:32:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:19:21.304 20:32:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:19:23.247 20:32:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:23.247 20:32:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:23.247 20:32:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:23.247 20:32:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:19:23.247 20:32:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:23.247 20:32:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:19:23.247 20:32:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:23.247 [global] 00:19:23.247 thread=1 00:19:23.247 invalidate=1 00:19:23.247 rw=write 00:19:23.247 time_based=1 00:19:23.247 runtime=1 00:19:23.247 ioengine=libaio 00:19:23.247 direct=1 00:19:23.247 bs=4096 00:19:23.247 iodepth=1 00:19:23.247 norandommap=0 00:19:23.247 numjobs=1 00:19:23.247 00:19:23.247 verify_dump=1 00:19:23.247 verify_backlog=512 00:19:23.247 verify_state_save=0 00:19:23.247 do_verify=1 00:19:23.247 verify=crc32c-intel 00:19:23.247 [job0] 00:19:23.247 filename=/dev/nvme0n1 00:19:23.247 [job1] 00:19:23.247 filename=/dev/nvme0n2 00:19:23.247 [job2] 00:19:23.247 filename=/dev/nvme0n3 00:19:23.247 [job3] 00:19:23.247 filename=/dev/nvme0n4 00:19:23.247 Could not set queue depth (nvme0n1) 00:19:23.247 Could not set queue depth (nvme0n2) 00:19:23.247 Could not set queue depth (nvme0n3) 00:19:23.247 Could not set queue depth (nvme0n4) 00:19:23.508 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.508 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.508 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.508 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.508 fio-3.35 00:19:23.508 Starting 4 threads 00:19:24.909 00:19:24.909 job0: (groupid=0, jobs=1): err= 0: pid=3053098: Mon May 13 20:32:40 2024 00:19:24.909 read: IOPS=15, BW=63.7KiB/s (65.3kB/s)(64.0KiB/1004msec) 00:19:24.909 slat (nsec): min=24345, max=46404, avg=26904.25, stdev=6335.26 00:19:24.909 clat (usec): min=1355, max=42573, avg=39502.21, stdev=10174.20 00:19:24.909 lat (usec): min=1394, max=42619, avg=39529.11, stdev=10171.02 00:19:24.909 clat percentiles (usec): 00:19:24.909 | 1.00th=[ 1352], 5.00th=[ 1352], 10.00th=[41681], 20.00th=[41681], 00:19:24.909 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:24.909 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:19:24.909 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 
00:19:24.909 | 99.99th=[42730] 00:19:24.909 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:19:24.909 slat (nsec): min=9842, max=52068, avg=31264.29, stdev=7646.97 00:19:24.909 clat (usec): min=363, max=979, avg=685.95, stdev=115.55 00:19:24.909 lat (usec): min=373, max=1012, avg=717.21, stdev=118.19 00:19:24.909 clat percentiles (usec): 00:19:24.909 | 1.00th=[ 424], 5.00th=[ 478], 10.00th=[ 553], 20.00th=[ 578], 00:19:24.909 | 30.00th=[ 611], 40.00th=[ 660], 50.00th=[ 693], 60.00th=[ 725], 00:19:24.909 | 70.00th=[ 758], 80.00th=[ 791], 90.00th=[ 832], 95.00th=[ 873], 00:19:24.909 | 99.00th=[ 914], 99.50th=[ 955], 99.90th=[ 979], 99.95th=[ 979], 00:19:24.909 | 99.99th=[ 979] 00:19:24.909 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:19:24.909 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:24.909 lat (usec) : 500=6.63%, 750=59.09%, 1000=31.25% 00:19:24.909 lat (msec) : 2=0.19%, 50=2.84% 00:19:24.909 cpu : usr=0.70%, sys=1.60%, ctx=529, majf=0, minf=1 00:19:24.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.910 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.910 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.910 job1: (groupid=0, jobs=1): err= 0: pid=3053100: Mon May 13 20:32:40 2024 00:19:24.910 read: IOPS=15, BW=63.6KiB/s (65.1kB/s)(64.0KiB/1007msec) 00:19:24.910 slat (nsec): min=24913, max=25494, avg=25222.06, stdev=149.07 00:19:24.910 clat (usec): min=1134, max=43060, avg=39536.22, stdev=10254.45 00:19:24.910 lat (usec): min=1159, max=43085, avg=39561.44, stdev=10254.47 00:19:24.910 clat percentiles (usec): 00:19:24.910 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[40633], 20.00th=[41681], 00:19:24.910 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:19:24.910 | 70.00th=[42206], 80.00th=[42206], 90.00th=[43254], 95.00th=[43254], 00:19:24.910 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:24.910 | 99.99th=[43254] 00:19:24.910 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:19:24.910 slat (nsec): min=9880, max=57795, avg=31633.75, stdev=9097.11 00:19:24.910 clat (usec): min=331, max=1000, avg=690.63, stdev=118.58 00:19:24.910 lat (usec): min=342, max=1039, avg=722.26, stdev=121.86 00:19:24.910 clat percentiles (usec): 00:19:24.910 | 1.00th=[ 433], 5.00th=[ 486], 10.00th=[ 529], 20.00th=[ 586], 00:19:24.910 | 30.00th=[ 635], 40.00th=[ 668], 50.00th=[ 701], 60.00th=[ 725], 00:19:24.910 | 70.00th=[ 758], 80.00th=[ 791], 90.00th=[ 840], 95.00th=[ 881], 00:19:24.910 | 99.00th=[ 947], 99.50th=[ 971], 99.90th=[ 1004], 99.95th=[ 1004], 00:19:24.910 | 99.99th=[ 1004] 00:19:24.910 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:19:24.910 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:24.910 lat (usec) : 500=6.44%, 750=60.23%, 1000=30.11% 00:19:24.910 lat (msec) : 2=0.38%, 50=2.84% 00:19:24.910 cpu : usr=1.49%, sys=0.80%, ctx=529, majf=0, minf=1 00:19:24.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.910 issued 
rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.910 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.910 job2: (groupid=0, jobs=1): err= 0: pid=3053101: Mon May 13 20:32:40 2024 00:19:24.910 read: IOPS=685, BW=2741KiB/s (2807kB/s)(2744KiB/1001msec) 00:19:24.910 slat (nsec): min=6095, max=57901, avg=23453.80, stdev=7447.81 00:19:24.910 clat (usec): min=254, max=1213, avg=690.58, stdev=113.23 00:19:24.910 lat (usec): min=260, max=1239, avg=714.03, stdev=115.09 00:19:24.910 clat percentiles (usec): 00:19:24.910 | 1.00th=[ 392], 5.00th=[ 486], 10.00th=[ 545], 20.00th=[ 586], 00:19:24.910 | 30.00th=[ 635], 40.00th=[ 668], 50.00th=[ 701], 60.00th=[ 742], 00:19:24.910 | 70.00th=[ 766], 80.00th=[ 791], 90.00th=[ 816], 95.00th=[ 840], 00:19:24.910 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 1221], 99.95th=[ 1221], 00:19:24.910 | 99.99th=[ 1221] 00:19:24.910 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:24.910 slat (nsec): min=8894, max=65269, avg=30020.56, stdev=8980.19 00:19:24.910 clat (usec): min=135, max=795, avg=456.02, stdev=120.94 00:19:24.910 lat (usec): min=146, max=827, avg=486.04, stdev=123.45 00:19:24.910 clat percentiles (usec): 00:19:24.910 | 1.00th=[ 184], 5.00th=[ 253], 10.00th=[ 285], 20.00th=[ 355], 00:19:24.910 | 30.00th=[ 396], 40.00th=[ 424], 50.00th=[ 465], 60.00th=[ 490], 00:19:24.910 | 70.00th=[ 519], 80.00th=[ 562], 90.00th=[ 619], 95.00th=[ 660], 00:19:24.910 | 99.00th=[ 717], 99.50th=[ 742], 99.90th=[ 799], 99.95th=[ 799], 00:19:24.910 | 99.99th=[ 799] 00:19:24.910 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:19:24.910 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:24.910 lat (usec) : 250=2.63%, 500=38.01%, 750=44.56%, 1000=14.68% 00:19:24.910 lat (msec) : 2=0.12% 00:19:24.910 cpu : usr=3.50%, sys=6.10%, ctx=1710, majf=0, minf=1 00:19:24.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.910 issued rwts: total=686,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.910 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.910 job3: (groupid=0, jobs=1): err= 0: pid=3053102: Mon May 13 20:32:40 2024 00:19:24.910 read: IOPS=415, BW=1661KiB/s (1701kB/s)(1724KiB/1038msec) 00:19:24.910 slat (nsec): min=6122, max=44735, avg=24389.09, stdev=6457.41 00:19:24.910 clat (usec): min=272, max=42854, avg=1600.19, stdev=5920.27 00:19:24.910 lat (usec): min=279, max=42879, avg=1624.58, stdev=5920.48 00:19:24.910 clat percentiles (usec): 00:19:24.910 | 1.00th=[ 322], 5.00th=[ 510], 10.00th=[ 570], 20.00th=[ 660], 00:19:24.910 | 30.00th=[ 693], 40.00th=[ 725], 50.00th=[ 758], 60.00th=[ 791], 00:19:24.910 | 70.00th=[ 807], 80.00th=[ 840], 90.00th=[ 881], 95.00th=[ 955], 00:19:24.910 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:19:24.910 | 99.99th=[42730] 00:19:24.910 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:19:24.910 slat (nsec): min=9056, max=53271, avg=32472.53, stdev=8318.23 00:19:24.910 clat (usec): min=191, max=878, avg=609.99, stdev=129.77 00:19:24.910 lat (usec): min=201, max=912, avg=642.47, stdev=133.07 00:19:24.910 clat percentiles (usec): 00:19:24.910 | 1.00th=[ 243], 5.00th=[ 355], 10.00th=[ 437], 20.00th=[ 510], 00:19:24.910 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 627], 
60.00th=[ 660], 00:19:24.910 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 799], 00:19:24.910 | 99.00th=[ 848], 99.50th=[ 857], 99.90th=[ 881], 99.95th=[ 881], 00:19:24.910 | 99.99th=[ 881] 00:19:24.910 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:19:24.910 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:24.910 lat (usec) : 250=0.85%, 500=10.50%, 750=57.69%, 1000=29.80% 00:19:24.910 lat (msec) : 2=0.21%, 50=0.95% 00:19:24.910 cpu : usr=2.51%, sys=2.80%, ctx=944, majf=0, minf=1 00:19:24.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.910 issued rwts: total=431,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.910 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.910 00:19:24.910 Run status group 0 (all jobs): 00:19:24.910 READ: bw=4428KiB/s (4534kB/s), 63.6KiB/s-2741KiB/s (65.1kB/s-2807kB/s), io=4596KiB (4706kB), run=1001-1038msec 00:19:24.910 WRITE: bw=9865KiB/s (10.1MB/s), 1973KiB/s-4092KiB/s (2020kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1038msec 00:19:24.910 00:19:24.910 Disk stats (read/write): 00:19:24.910 nvme0n1: ios=34/512, merge=0/0, ticks=1270/336, in_queue=1606, util=83.97% 00:19:24.910 nvme0n2: ios=57/512, merge=0/0, ticks=534/332, in_queue=866, util=91.11% 00:19:24.910 nvme0n3: ios=569/926, merge=0/0, ticks=412/341, in_queue=753, util=95.13% 00:19:24.910 nvme0n4: ios=448/512, merge=0/0, ticks=1321/233, in_queue=1554, util=94.22% 00:19:24.910 20:32:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:24.910 [global] 00:19:24.910 thread=1 00:19:24.910 invalidate=1 00:19:24.910 rw=randwrite 00:19:24.910 time_based=1 00:19:24.910 runtime=1 00:19:24.910 ioengine=libaio 00:19:24.910 direct=1 00:19:24.910 bs=4096 00:19:24.910 iodepth=1 00:19:24.910 norandommap=0 00:19:24.910 numjobs=1 00:19:24.910 00:19:24.910 verify_dump=1 00:19:24.910 verify_backlog=512 00:19:24.910 verify_state_save=0 00:19:24.910 do_verify=1 00:19:24.910 verify=crc32c-intel 00:19:24.910 [job0] 00:19:24.910 filename=/dev/nvme0n1 00:19:24.910 [job1] 00:19:24.910 filename=/dev/nvme0n2 00:19:24.910 [job2] 00:19:24.910 filename=/dev/nvme0n3 00:19:24.910 [job3] 00:19:24.910 filename=/dev/nvme0n4 00:19:24.910 Could not set queue depth (nvme0n1) 00:19:24.910 Could not set queue depth (nvme0n2) 00:19:24.910 Could not set queue depth (nvme0n3) 00:19:24.910 Could not set queue depth (nvme0n4) 00:19:25.180 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.180 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.180 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.180 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.180 fio-3.35 00:19:25.180 Starting 4 threads 00:19:26.586 00:19:26.586 job0: (groupid=0, jobs=1): err= 0: pid=3053625: Mon May 13 20:32:42 2024 00:19:26.586 read: IOPS=15, BW=63.8KiB/s (65.3kB/s)(64.0KiB/1003msec) 00:19:26.586 slat (nsec): min=24942, max=42700, avg=26258.62, stdev=4387.27 00:19:26.586 clat (usec): min=1201, max=43028, 
avg=39626.77, stdev=10255.26 00:19:26.586 lat (usec): min=1244, max=43053, avg=39653.02, stdev=10250.88 00:19:26.586 clat percentiles (usec): 00:19:26.586 | 1.00th=[ 1205], 5.00th=[ 1205], 10.00th=[41681], 20.00th=[41681], 00:19:26.586 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:26.586 | 70.00th=[42206], 80.00th=[42206], 90.00th=[43254], 95.00th=[43254], 00:19:26.586 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:26.586 | 99.99th=[43254] 00:19:26.586 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:19:26.586 slat (nsec): min=9327, max=63925, avg=29312.06, stdev=9046.56 00:19:26.586 clat (usec): min=148, max=1034, avg=682.87, stdev=126.53 00:19:26.586 lat (usec): min=160, max=1066, avg=712.18, stdev=130.50 00:19:26.586 clat percentiles (usec): 00:19:26.586 | 1.00th=[ 310], 5.00th=[ 441], 10.00th=[ 537], 20.00th=[ 594], 00:19:26.586 | 30.00th=[ 644], 40.00th=[ 660], 50.00th=[ 685], 60.00th=[ 717], 00:19:26.586 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 832], 95.00th=[ 873], 00:19:26.586 | 99.00th=[ 947], 99.50th=[ 979], 99.90th=[ 1037], 99.95th=[ 1037], 00:19:26.586 | 99.99th=[ 1037] 00:19:26.586 bw ( KiB/s): min= 4096, max= 4096, per=50.65%, avg=4096.00, stdev= 0.00, samples=1 00:19:26.586 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:26.586 lat (usec) : 250=0.57%, 500=7.20%, 750=61.93%, 1000=26.89% 00:19:26.586 lat (msec) : 2=0.57%, 50=2.84% 00:19:26.586 cpu : usr=0.90%, sys=1.30%, ctx=529, majf=0, minf=1 00:19:26.586 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:26.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.586 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.586 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:26.586 job1: (groupid=0, jobs=1): err= 0: pid=3053626: Mon May 13 20:32:42 2024 00:19:26.586 read: IOPS=15, BW=63.6KiB/s (65.1kB/s)(64.0KiB/1007msec) 00:19:26.586 slat (nsec): min=24639, max=25551, avg=25277.81, stdev=204.80 00:19:26.586 clat (usec): min=1136, max=43039, avg=39508.64, stdev=10239.78 00:19:26.586 lat (usec): min=1162, max=43064, avg=39533.92, stdev=10239.77 00:19:26.586 clat percentiles (usec): 00:19:26.586 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41681], 20.00th=[41681], 00:19:26.586 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:19:26.586 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:19:26.586 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:26.586 | 99.99th=[43254] 00:19:26.586 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:19:26.586 slat (nsec): min=9721, max=57971, avg=29005.84, stdev=8821.69 00:19:26.586 clat (usec): min=344, max=1090, avg=694.45, stdev=125.49 00:19:26.586 lat (usec): min=361, max=1112, avg=723.45, stdev=128.43 00:19:26.586 clat percentiles (usec): 00:19:26.586 | 1.00th=[ 400], 5.00th=[ 474], 10.00th=[ 537], 20.00th=[ 586], 00:19:26.586 | 30.00th=[ 635], 40.00th=[ 668], 50.00th=[ 701], 60.00th=[ 734], 00:19:26.586 | 70.00th=[ 766], 80.00th=[ 799], 90.00th=[ 840], 95.00th=[ 889], 00:19:26.586 | 99.00th=[ 996], 99.50th=[ 1057], 99.90th=[ 1090], 99.95th=[ 1090], 00:19:26.586 | 99.99th=[ 1090] 00:19:26.586 bw ( KiB/s): min= 4096, max= 4096, per=50.65%, avg=4096.00, stdev= 0.00, samples=1 00:19:26.586 iops : min= 1024, max= 1024, 
avg=1024.00, stdev= 0.00, samples=1 00:19:26.586 lat (usec) : 500=6.63%, 750=57.39%, 1000=32.20% 00:19:26.586 lat (msec) : 2=0.95%, 50=2.84% 00:19:26.586 cpu : usr=0.89%, sys=1.29%, ctx=529, majf=0, minf=1 00:19:26.586 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:26.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.586 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.586 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:26.586 job2: (groupid=0, jobs=1): err= 0: pid=3053627: Mon May 13 20:32:42 2024 00:19:26.586 read: IOPS=15, BW=63.8KiB/s (65.3kB/s)(64.0KiB/1003msec) 00:19:26.586 slat (nsec): min=9464, max=28212, avg=25641.62, stdev=5213.61 00:19:26.586 clat (usec): min=1058, max=43022, avg=39569.58, stdev=10278.37 00:19:26.586 lat (usec): min=1068, max=43049, avg=39595.22, stdev=10282.72 00:19:26.586 clat percentiles (usec): 00:19:26.586 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41681], 20.00th=[41681], 00:19:26.586 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:19:26.586 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:19:26.586 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:26.586 | 99.99th=[43254] 00:19:26.586 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:19:26.586 slat (nsec): min=9205, max=83165, avg=33799.09, stdev=7945.46 00:19:26.586 clat (usec): min=315, max=1010, avg=679.12, stdev=136.63 00:19:26.586 lat (usec): min=327, max=1045, avg=712.92, stdev=138.86 00:19:26.586 clat percentiles (usec): 00:19:26.586 | 1.00th=[ 363], 5.00th=[ 445], 10.00th=[ 494], 20.00th=[ 562], 00:19:26.586 | 30.00th=[ 611], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[ 725], 00:19:26.586 | 70.00th=[ 758], 80.00th=[ 799], 90.00th=[ 865], 95.00th=[ 898], 00:19:26.586 | 99.00th=[ 947], 99.50th=[ 996], 99.90th=[ 1012], 99.95th=[ 1012], 00:19:26.586 | 99.99th=[ 1012] 00:19:26.586 bw ( KiB/s): min= 4096, max= 4096, per=50.65%, avg=4096.00, stdev= 0.00, samples=1 00:19:26.586 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:26.586 lat (usec) : 500=10.61%, 750=56.44%, 1000=29.55% 00:19:26.586 lat (msec) : 2=0.57%, 50=2.84% 00:19:26.586 cpu : usr=1.30%, sys=2.10%, ctx=529, majf=0, minf=1 00:19:26.586 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:26.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.586 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.586 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:26.586 job3: (groupid=0, jobs=1): err= 0: pid=3053628: Mon May 13 20:32:42 2024 00:19:26.586 read: IOPS=16, BW=67.1KiB/s (68.7kB/s)(68.0KiB/1013msec) 00:19:26.586 slat (nsec): min=9697, max=25709, avg=24390.94, stdev=3789.97 00:19:26.586 clat (usec): min=1155, max=42993, avg=39771.59, stdev=9959.11 00:19:26.586 lat (usec): min=1164, max=43019, avg=39795.98, stdev=9962.89 00:19:26.586 clat percentiles (usec): 00:19:26.586 | 1.00th=[ 1156], 5.00th=[ 1156], 10.00th=[41681], 20.00th=[42206], 00:19:26.586 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:26.586 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:19:26.586 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 
99.95th=[43254], 00:19:26.586 | 99.99th=[43254] 00:19:26.586 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:19:26.586 slat (nsec): min=9482, max=52858, avg=27319.09, stdev=10325.95 00:19:26.586 clat (usec): min=175, max=1094, avg=622.40, stdev=164.30 00:19:26.586 lat (usec): min=185, max=1125, avg=649.72, stdev=168.00 00:19:26.586 clat percentiles (usec): 00:19:26.586 | 1.00th=[ 355], 5.00th=[ 412], 10.00th=[ 429], 20.00th=[ 486], 00:19:26.586 | 30.00th=[ 523], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 627], 00:19:26.586 | 70.00th=[ 717], 80.00th=[ 791], 90.00th=[ 857], 95.00th=[ 930], 00:19:26.586 | 99.00th=[ 1012], 99.50th=[ 1057], 99.90th=[ 1090], 99.95th=[ 1090], 00:19:26.586 | 99.99th=[ 1090] 00:19:26.586 bw ( KiB/s): min= 4096, max= 4096, per=50.65%, avg=4096.00, stdev= 0.00, samples=1 00:19:26.586 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:26.586 lat (usec) : 250=0.38%, 500=21.55%, 750=51.04%, 1000=22.68% 00:19:26.586 lat (msec) : 2=1.32%, 50=3.02% 00:19:26.586 cpu : usr=0.79%, sys=1.38%, ctx=530, majf=0, minf=1 00:19:26.586 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:26.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.586 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.586 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:26.586 00:19:26.586 Run status group 0 (all jobs): 00:19:26.586 READ: bw=257KiB/s (263kB/s), 63.6KiB/s-67.1KiB/s (65.1kB/s-68.7kB/s), io=260KiB (266kB), run=1003-1013msec 00:19:26.586 WRITE: bw=8087KiB/s (8281kB/s), 2022KiB/s-2042KiB/s (2070kB/s-2091kB/s), io=8192KiB (8389kB), run=1003-1013msec 00:19:26.586 00:19:26.586 Disk stats (read/write): 00:19:26.586 nvme0n1: ios=34/512, merge=0/0, ticks=1314/330, in_queue=1644, util=84.77% 00:19:26.586 nvme0n2: ios=53/512, merge=0/0, ticks=514/333, in_queue=847, util=91.64% 00:19:26.586 nvme0n3: ios=34/512, merge=0/0, ticks=1345/251, in_queue=1596, util=92.41% 00:19:26.586 nvme0n4: ios=76/512, merge=0/0, ticks=642/305, in_queue=947, util=97.33% 00:19:26.587 20:32:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:26.587 [global] 00:19:26.587 thread=1 00:19:26.587 invalidate=1 00:19:26.587 rw=write 00:19:26.587 time_based=1 00:19:26.587 runtime=1 00:19:26.587 ioengine=libaio 00:19:26.587 direct=1 00:19:26.587 bs=4096 00:19:26.587 iodepth=128 00:19:26.587 norandommap=0 00:19:26.587 numjobs=1 00:19:26.587 00:19:26.587 verify_dump=1 00:19:26.587 verify_backlog=512 00:19:26.587 verify_state_save=0 00:19:26.587 do_verify=1 00:19:26.587 verify=crc32c-intel 00:19:26.587 [job0] 00:19:26.587 filename=/dev/nvme0n1 00:19:26.587 [job1] 00:19:26.587 filename=/dev/nvme0n2 00:19:26.587 [job2] 00:19:26.587 filename=/dev/nvme0n3 00:19:26.587 [job3] 00:19:26.587 filename=/dev/nvme0n4 00:19:26.587 Could not set queue depth (nvme0n1) 00:19:26.587 Could not set queue depth (nvme0n2) 00:19:26.587 Could not set queue depth (nvme0n3) 00:19:26.587 Could not set queue depth (nvme0n4) 00:19:26.851 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.851 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.851 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.851 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.851 fio-3.35 00:19:26.851 Starting 4 threads 00:19:28.256 00:19:28.256 job0: (groupid=0, jobs=1): err= 0: pid=3054146: Mon May 13 20:32:43 2024 00:19:28.256 read: IOPS=4819, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1003msec) 00:19:28.256 slat (nsec): min=875, max=49806k, avg=121451.88, stdev=1153804.33 00:19:28.256 clat (usec): min=1792, max=79638, avg=15066.27, stdev=14210.83 00:19:28.256 lat (usec): min=4561, max=79656, avg=15187.72, stdev=14285.02 00:19:28.256 clat percentiles (usec): 00:19:28.256 | 1.00th=[ 5276], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 7308], 00:19:28.256 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10814], 00:19:28.256 | 70.00th=[13698], 80.00th=[18482], 90.00th=[24773], 95.00th=[49021], 00:19:28.256 | 99.00th=[70779], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168], 00:19:28.256 | 99.99th=[79168] 00:19:28.256 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:19:28.256 slat (nsec): min=1589, max=17501k, avg=73757.14, stdev=552980.87 00:19:28.256 clat (usec): min=1242, max=41734, avg=10549.47, stdev=6718.60 00:19:28.256 lat (usec): min=1252, max=41744, avg=10623.23, stdev=6754.78 00:19:28.256 clat percentiles (usec): 00:19:28.256 | 1.00th=[ 4047], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6194], 00:19:28.256 | 30.00th=[ 6587], 40.00th=[ 7046], 50.00th=[ 7701], 60.00th=[ 8717], 00:19:28.256 | 70.00th=[10683], 80.00th=[13698], 90.00th=[19530], 95.00th=[28705], 00:19:28.256 | 99.00th=[33424], 99.50th=[33817], 99.90th=[36963], 99.95th=[41681], 00:19:28.256 | 99.99th=[41681] 00:19:28.256 bw ( KiB/s): min=19272, max=21688, per=25.50%, avg=20480.00, stdev=1708.37, samples=2 00:19:28.256 iops : min= 4818, max= 5422, avg=5120.00, stdev=427.09, samples=2 00:19:28.256 lat (msec) : 2=0.03%, 4=0.42%, 10=59.80%, 20=25.88%, 50=11.53% 00:19:28.256 lat (msec) : 100=2.34% 00:19:28.256 cpu : usr=2.79%, sys=4.09%, ctx=514, majf=0, minf=1 00:19:28.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:28.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.256 issued rwts: total=4834,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.256 job1: (groupid=0, jobs=1): err= 0: pid=3054148: Mon May 13 20:32:43 2024 00:19:28.256 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:19:28.256 slat (nsec): min=936, max=13250k, avg=100007.31, stdev=704541.09 00:19:28.256 clat (usec): min=3738, max=39902, avg=11714.39, stdev=4152.57 00:19:28.256 lat (usec): min=3743, max=39912, avg=11814.40, stdev=4208.63 00:19:28.256 clat percentiles (usec): 00:19:28.256 | 1.00th=[ 5997], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[ 8979], 00:19:28.256 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[10552], 60.00th=[11338], 00:19:28.256 | 70.00th=[12518], 80.00th=[13566], 90.00th=[15926], 95.00th=[20055], 00:19:28.256 | 99.00th=[27919], 99.50th=[30016], 99.90th=[40109], 99.95th=[40109], 00:19:28.256 | 99.99th=[40109] 00:19:28.256 write: IOPS=4755, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1009msec); 0 zone resets 00:19:28.256 slat (nsec): min=1608, max=8638.6k, avg=105639.92, stdev=542612.96 00:19:28.256 clat (usec): min=1264, max=44074, avg=15374.12, stdev=8029.61 00:19:28.256 lat 
(usec): min=1274, max=44088, avg=15479.76, stdev=8084.75 00:19:28.256 clat percentiles (usec): 00:19:28.256 | 1.00th=[ 3851], 5.00th=[ 6587], 10.00th=[ 8029], 20.00th=[ 8848], 00:19:28.256 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[13566], 60.00th=[15795], 00:19:28.256 | 70.00th=[18220], 80.00th=[22152], 90.00th=[27132], 95.00th=[30016], 00:19:28.256 | 99.00th=[41157], 99.50th=[43254], 99.90th=[43779], 99.95th=[44303], 00:19:28.256 | 99.99th=[44303] 00:19:28.256 bw ( KiB/s): min=15944, max=21424, per=23.26%, avg=18684.00, stdev=3874.95, samples=2 00:19:28.256 iops : min= 3986, max= 5356, avg=4671.00, stdev=968.74, samples=2 00:19:28.256 lat (msec) : 2=0.03%, 4=0.58%, 10=33.23%, 20=50.85%, 50=15.30% 00:19:28.256 cpu : usr=2.38%, sys=4.86%, ctx=516, majf=0, minf=1 00:19:28.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:28.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.256 issued rwts: total=4608,4798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.256 job2: (groupid=0, jobs=1): err= 0: pid=3054149: Mon May 13 20:32:43 2024 00:19:28.256 read: IOPS=4753, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1002msec) 00:19:28.256 slat (nsec): min=896, max=17403k, avg=112362.92, stdev=704640.68 00:19:28.257 clat (usec): min=1783, max=53526, avg=14015.55, stdev=8767.78 00:19:28.257 lat (usec): min=2451, max=53552, avg=14127.91, stdev=8835.21 00:19:28.257 clat percentiles (usec): 00:19:28.257 | 1.00th=[ 5080], 5.00th=[ 7046], 10.00th=[ 7701], 20.00th=[ 8160], 00:19:28.257 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9896], 60.00th=[11731], 00:19:28.257 | 70.00th=[14353], 80.00th=[21103], 90.00th=[25297], 95.00th=[35390], 00:19:28.257 | 99.00th=[43779], 99.50th=[48497], 99.90th=[48497], 99.95th=[49021], 00:19:28.257 | 99.99th=[53740] 00:19:28.257 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:19:28.257 slat (nsec): min=1576, max=11910k, avg=85425.23, stdev=573354.24 00:19:28.257 clat (usec): min=1325, max=43138, avg=11588.40, stdev=6695.21 00:19:28.257 lat (usec): min=1334, max=43148, avg=11673.83, stdev=6734.97 00:19:28.257 clat percentiles (usec): 00:19:28.257 | 1.00th=[ 4490], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 7373], 00:19:28.257 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9503], 00:19:28.257 | 70.00th=[11207], 80.00th=[16581], 90.00th=[20579], 95.00th=[27395], 00:19:28.257 | 99.00th=[36963], 99.50th=[36963], 99.90th=[40633], 99.95th=[42730], 00:19:28.257 | 99.99th=[43254] 00:19:28.257 bw ( KiB/s): min=13928, max=27032, per=25.50%, avg=20480.00, stdev=9265.93, samples=2 00:19:28.257 iops : min= 3482, max= 6758, avg=5120.00, stdev=2316.48, samples=2 00:19:28.257 lat (msec) : 2=0.21%, 4=0.30%, 10=57.35%, 20=26.47%, 50=15.64% 00:19:28.257 lat (msec) : 100=0.02% 00:19:28.257 cpu : usr=3.60%, sys=3.40%, ctx=481, majf=0, minf=1 00:19:28.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:28.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.257 issued rwts: total=4763,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.257 job3: (groupid=0, jobs=1): err= 0: pid=3054150: Mon May 13 20:32:43 2024 00:19:28.257 read: 
IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:19:28.257 slat (nsec): min=910, max=19038k, avg=100487.20, stdev=843120.84 00:19:28.257 clat (usec): min=2693, max=48861, avg=13300.65, stdev=8587.05 00:19:28.257 lat (usec): min=2700, max=48880, avg=13401.13, stdev=8636.01 00:19:28.257 clat percentiles (usec): 00:19:28.257 | 1.00th=[ 5800], 5.00th=[ 6652], 10.00th=[ 7111], 20.00th=[ 8291], 00:19:28.257 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[10945], 00:19:28.257 | 70.00th=[12125], 80.00th=[15795], 90.00th=[25560], 95.00th=[31589], 00:19:28.257 | 99.00th=[46924], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:19:28.257 | 99.99th=[49021] 00:19:28.257 write: IOPS=5202, BW=20.3MiB/s (21.3MB/s)(20.4MiB/1004msec); 0 zone resets 00:19:28.257 slat (nsec): min=1625, max=15903k, avg=84837.14, stdev=564594.94 00:19:28.257 clat (usec): min=1200, max=55768, avg=11298.65, stdev=7088.87 00:19:28.257 lat (usec): min=1209, max=62836, avg=11383.49, stdev=7120.89 00:19:28.257 clat percentiles (usec): 00:19:28.257 | 1.00th=[ 3916], 5.00th=[ 4752], 10.00th=[ 5342], 20.00th=[ 6587], 00:19:28.257 | 30.00th=[ 7570], 40.00th=[ 8455], 50.00th=[ 9503], 60.00th=[10290], 00:19:28.257 | 70.00th=[12518], 80.00th=[14484], 90.00th=[17171], 95.00th=[26346], 00:19:28.257 | 99.00th=[37487], 99.50th=[53216], 99.90th=[55837], 99.95th=[55837], 00:19:28.257 | 99.99th=[55837] 00:19:28.257 bw ( KiB/s): min=16408, max=24576, per=25.51%, avg=20492.00, stdev=5775.65, samples=2 00:19:28.257 iops : min= 4102, max= 6144, avg=5123.00, stdev=1443.91, samples=2 00:19:28.257 lat (msec) : 2=0.03%, 4=1.12%, 10=49.81%, 20=38.62%, 50=10.14% 00:19:28.257 lat (msec) : 100=0.28% 00:19:28.257 cpu : usr=4.09%, sys=3.99%, ctx=400, majf=0, minf=1 00:19:28.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:28.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.257 issued rwts: total=5120,5223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.257 00:19:28.257 Run status group 0 (all jobs): 00:19:28.257 READ: bw=74.8MiB/s (78.4MB/s), 17.8MiB/s-19.9MiB/s (18.7MB/s-20.9MB/s), io=75.5MiB (79.2MB), run=1002-1009msec 00:19:28.257 WRITE: bw=78.4MiB/s (82.2MB/s), 18.6MiB/s-20.3MiB/s (19.5MB/s-21.3MB/s), io=79.1MiB (83.0MB), run=1002-1009msec 00:19:28.257 00:19:28.257 Disk stats (read/write): 00:19:28.257 nvme0n1: ios=4145/4323, merge=0/0, ticks=26345/21752, in_queue=48097, util=82.87% 00:19:28.257 nvme0n2: ios=3727/4096, merge=0/0, ticks=42197/60193, in_queue=102390, util=89.30% 00:19:28.257 nvme0n3: ios=3641/3993, merge=0/0, ticks=20358/14657, in_queue=35015, util=89.35% 00:19:28.257 nvme0n4: ios=4628/4639, merge=0/0, ticks=37430/31999, in_queue=69429, util=92.00% 00:19:28.257 20:32:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:28.257 [global] 00:19:28.257 thread=1 00:19:28.257 invalidate=1 00:19:28.257 rw=randwrite 00:19:28.257 time_based=1 00:19:28.257 runtime=1 00:19:28.257 ioengine=libaio 00:19:28.257 direct=1 00:19:28.257 bs=4096 00:19:28.257 iodepth=128 00:19:28.257 norandommap=0 00:19:28.257 numjobs=1 00:19:28.257 00:19:28.257 verify_dump=1 00:19:28.257 verify_backlog=512 00:19:28.257 verify_state_save=0 00:19:28.257 do_verify=1 00:19:28.257 verify=crc32c-intel 00:19:28.257 [job0] 
00:19:28.257 filename=/dev/nvme0n1 00:19:28.257 [job1] 00:19:28.257 filename=/dev/nvme0n2 00:19:28.257 [job2] 00:19:28.257 filename=/dev/nvme0n3 00:19:28.257 [job3] 00:19:28.257 filename=/dev/nvme0n4 00:19:28.257 Could not set queue depth (nvme0n1) 00:19:28.257 Could not set queue depth (nvme0n2) 00:19:28.257 Could not set queue depth (nvme0n3) 00:19:28.257 Could not set queue depth (nvme0n4) 00:19:28.524 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:28.524 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:28.524 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:28.524 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:28.524 fio-3.35 00:19:28.524 Starting 4 threads 00:19:29.944 00:19:29.944 job0: (groupid=0, jobs=1): err= 0: pid=3054641: Mon May 13 20:32:45 2024 00:19:29.944 read: IOPS=4841, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1004msec) 00:19:29.944 slat (nsec): min=994, max=21134k, avg=106413.15, stdev=849100.43 00:19:29.944 clat (usec): min=1146, max=59684, avg=13762.50, stdev=10375.55 00:19:29.944 lat (usec): min=1311, max=59696, avg=13868.92, stdev=10442.22 00:19:29.944 clat percentiles (usec): 00:19:29.944 | 1.00th=[ 1975], 5.00th=[ 2573], 10.00th=[ 4359], 20.00th=[ 6783], 00:19:29.944 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[11600], 00:19:29.944 | 70.00th=[16450], 80.00th=[18220], 90.00th=[27919], 95.00th=[35390], 00:19:29.944 | 99.00th=[53216], 99.50th=[56886], 99.90th=[59507], 99.95th=[59507], 00:19:29.944 | 99.99th=[59507] 00:19:29.944 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:19:29.944 slat (nsec): min=1634, max=12822k, avg=78878.47, stdev=537723.38 00:19:29.944 clat (usec): min=1153, max=56553, avg=11798.77, stdev=7036.68 00:19:29.944 lat (usec): min=1163, max=56555, avg=11877.65, stdev=7069.75 00:19:29.944 clat percentiles (usec): 00:19:29.944 | 1.00th=[ 2057], 5.00th=[ 3523], 10.00th=[ 4686], 20.00th=[ 5473], 00:19:29.944 | 30.00th=[ 7701], 40.00th=[ 9110], 50.00th=[10683], 60.00th=[12911], 00:19:29.944 | 70.00th=[15533], 80.00th=[16909], 90.00th=[17695], 95.00th=[18220], 00:19:29.944 | 99.00th=[41157], 99.50th=[49021], 99.90th=[52167], 99.95th=[53216], 00:19:29.944 | 99.99th=[56361] 00:19:29.944 bw ( KiB/s): min=19352, max=21608, per=29.16%, avg=20480.00, stdev=1595.23, samples=2 00:19:29.944 iops : min= 4838, max= 5402, avg=5120.00, stdev=398.81, samples=2 00:19:29.944 lat (msec) : 2=0.91%, 4=7.98%, 10=39.71%, 20=41.58%, 50=8.67% 00:19:29.944 lat (msec) : 100=1.16% 00:19:29.944 cpu : usr=4.59%, sys=4.99%, ctx=387, majf=0, minf=1 00:19:29.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:29.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:29.944 issued rwts: total=4861,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:29.944 job1: (groupid=0, jobs=1): err= 0: pid=3054651: Mon May 13 20:32:45 2024 00:19:29.944 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:19:29.944 slat (nsec): min=904, max=19745k, avg=122195.58, stdev=897443.61 00:19:29.944 clat (usec): min=1403, max=65616, avg=15809.10, stdev=11671.03 00:19:29.944 lat (usec): 
min=1409, max=66279, avg=15931.30, stdev=11750.91 00:19:29.944 clat percentiles (usec): 00:19:29.944 | 1.00th=[ 2114], 5.00th=[ 4621], 10.00th=[ 5145], 20.00th=[ 5932], 00:19:29.944 | 30.00th=[ 8455], 40.00th=[10159], 50.00th=[14091], 60.00th=[14484], 00:19:29.944 | 70.00th=[19792], 80.00th=[21890], 90.00th=[32375], 95.00th=[39060], 00:19:29.944 | 99.00th=[60031], 99.50th=[63701], 99.90th=[64226], 99.95th=[64750], 00:19:29.944 | 99.99th=[65799] 00:19:29.944 write: IOPS=4419, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1008msec); 0 zone resets 00:19:29.944 slat (nsec): min=1538, max=22708k, avg=101941.74, stdev=630039.56 00:19:29.944 clat (usec): min=1781, max=51814, avg=14079.96, stdev=8649.55 00:19:29.944 lat (usec): min=1788, max=51836, avg=14181.91, stdev=8701.55 00:19:29.944 clat percentiles (usec): 00:19:29.944 | 1.00th=[ 2966], 5.00th=[ 4686], 10.00th=[ 5276], 20.00th=[ 7308], 00:19:29.944 | 30.00th=[ 8160], 40.00th=[ 9241], 50.00th=[10814], 60.00th=[14222], 00:19:29.944 | 70.00th=[17171], 80.00th=[20055], 90.00th=[25822], 95.00th=[31065], 00:19:29.944 | 99.00th=[45351], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:19:29.944 | 99.99th=[51643] 00:19:29.944 bw ( KiB/s): min=14136, max=20480, per=24.65%, avg=17308.00, stdev=4485.89, samples=2 00:19:29.944 iops : min= 3534, max= 5120, avg=4327.00, stdev=1121.47, samples=2 00:19:29.944 lat (msec) : 2=0.33%, 4=2.48%, 10=39.05%, 20=33.18%, 50=23.51% 00:19:29.944 lat (msec) : 100=1.46% 00:19:29.944 cpu : usr=2.68%, sys=4.07%, ctx=476, majf=0, minf=1 00:19:29.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:29.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:29.944 issued rwts: total=4096,4455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:29.944 job2: (groupid=0, jobs=1): err= 0: pid=3054668: Mon May 13 20:32:45 2024 00:19:29.945 read: IOPS=3049, BW=11.9MiB/s (12.5MB/s)(12.1MiB/1012msec) 00:19:29.945 slat (nsec): min=957, max=26131k, avg=148506.93, stdev=1160388.07 00:19:29.945 clat (usec): min=3232, max=74274, avg=18422.92, stdev=12328.89 00:19:29.945 lat (usec): min=3237, max=74283, avg=18571.43, stdev=12408.01 00:19:29.945 clat percentiles (usec): 00:19:29.945 | 1.00th=[ 4817], 5.00th=[ 6980], 10.00th=[ 9634], 20.00th=[10028], 00:19:29.945 | 30.00th=[10290], 40.00th=[11600], 50.00th=[14091], 60.00th=[17433], 00:19:29.945 | 70.00th=[20317], 80.00th=[24249], 90.00th=[34866], 95.00th=[42206], 00:19:29.945 | 99.00th=[68682], 99.50th=[71828], 99.90th=[73925], 99.95th=[73925], 00:19:29.945 | 99.99th=[73925] 00:19:29.945 write: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec); 0 zone resets 00:19:29.945 slat (nsec): min=1645, max=44436k, avg=138051.02, stdev=1006464.77 00:19:29.945 clat (usec): min=1991, max=74238, avg=18540.09, stdev=8729.81 00:19:29.945 lat (usec): min=1999, max=74240, avg=18678.14, stdev=8780.22 00:19:29.945 clat percentiles (usec): 00:19:29.945 | 1.00th=[ 3556], 5.00th=[ 6980], 10.00th=[ 9110], 20.00th=[11994], 00:19:29.945 | 30.00th=[14746], 40.00th=[16712], 50.00th=[17171], 60.00th=[17695], 00:19:29.945 | 70.00th=[19792], 80.00th=[24511], 90.00th=[30016], 95.00th=[35390], 00:19:29.945 | 99.00th=[46924], 99.50th=[56361], 99.90th=[63177], 99.95th=[73925], 00:19:29.945 | 99.99th=[73925] 00:19:29.945 bw ( KiB/s): min=12288, max=15480, per=19.77%, avg=13884.00, stdev=2257.08, samples=2 00:19:29.945 iops : min= 
3072, max= 3870, avg=3471.00, stdev=564.27, samples=2 00:19:29.945 lat (msec) : 2=0.09%, 4=0.61%, 10=16.06%, 20=53.10%, 50=28.38% 00:19:29.945 lat (msec) : 100=1.75% 00:19:29.945 cpu : usr=1.88%, sys=3.46%, ctx=433, majf=0, minf=1 00:19:29.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:29.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:29.945 issued rwts: total=3086,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:29.945 job3: (groupid=0, jobs=1): err= 0: pid=3054673: Mon May 13 20:32:45 2024 00:19:29.945 read: IOPS=4525, BW=17.7MiB/s (18.5MB/s)(17.7MiB/1003msec) 00:19:29.945 slat (nsec): min=951, max=17603k, avg=104474.83, stdev=687771.90 00:19:29.945 clat (usec): min=1313, max=52451, avg=13891.08, stdev=8097.91 00:19:29.945 lat (usec): min=4115, max=52456, avg=13995.55, stdev=8140.61 00:19:29.945 clat percentiles (usec): 00:19:29.945 | 1.00th=[ 4359], 5.00th=[ 6325], 10.00th=[ 7373], 20.00th=[ 9765], 00:19:29.945 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11994], 60.00th=[12649], 00:19:29.945 | 70.00th=[13435], 80.00th=[14091], 90.00th=[22414], 95.00th=[35914], 00:19:29.945 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:19:29.945 | 99.99th=[52691] 00:19:29.945 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:19:29.945 slat (nsec): min=1588, max=16928k, avg=95036.74, stdev=566164.98 00:19:29.945 clat (usec): min=1293, max=79405, avg=13932.07, stdev=8490.97 00:19:29.945 lat (usec): min=1303, max=79409, avg=14027.11, stdev=8533.05 00:19:29.945 clat percentiles (usec): 00:19:29.945 | 1.00th=[ 1385], 5.00th=[ 2769], 10.00th=[ 4555], 20.00th=[ 7373], 00:19:29.945 | 30.00th=[10159], 40.00th=[10683], 50.00th=[12780], 60.00th=[15664], 00:19:29.945 | 70.00th=[17695], 80.00th=[19006], 90.00th=[21103], 95.00th=[24249], 00:19:29.945 | 99.00th=[47449], 99.50th=[61080], 99.90th=[76022], 99.95th=[76022], 00:19:29.945 | 99.99th=[79168] 00:19:29.945 bw ( KiB/s): min=18376, max=18488, per=26.25%, avg=18432.00, stdev=79.20, samples=2 00:19:29.945 iops : min= 4594, max= 4622, avg=4608.00, stdev=19.80, samples=2 00:19:29.945 lat (msec) : 2=1.84%, 4=1.34%, 10=21.42%, 20=63.54%, 50=11.44% 00:19:29.945 lat (msec) : 100=0.43% 00:19:29.945 cpu : usr=2.79%, sys=4.99%, ctx=503, majf=0, minf=1 00:19:29.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:29.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:29.945 issued rwts: total=4539,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:29.945 00:19:29.945 Run status group 0 (all jobs): 00:19:29.945 READ: bw=64.0MiB/s (67.1MB/s), 11.9MiB/s-18.9MiB/s (12.5MB/s-19.8MB/s), io=64.8MiB (67.9MB), run=1003-1012msec 00:19:29.945 WRITE: bw=68.6MiB/s (71.9MB/s), 13.8MiB/s-19.9MiB/s (14.5MB/s-20.9MB/s), io=69.4MiB (72.8MB), run=1003-1012msec 00:19:29.945 00:19:29.945 Disk stats (read/write): 00:19:29.945 nvme0n1: ios=3624/3719, merge=0/0, ticks=55946/45870, in_queue=101816, util=87.27% 00:19:29.945 nvme0n2: ios=3624/3923, merge=0/0, ticks=22540/24587, in_queue=47127, util=91.23% 00:19:29.945 nvme0n3: ios=2615/2775, merge=0/0, ticks=43576/44736, in_queue=88312, util=92.83% 00:19:29.945 
nvme0n4: ios=3633/3759, merge=0/0, ticks=33146/32051, in_queue=65197, util=97.23% 00:19:29.945 20:32:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:29.945 20:32:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3054743 00:19:29.945 20:32:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:29.945 20:32:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:29.945 [global] 00:19:29.945 thread=1 00:19:29.945 invalidate=1 00:19:29.945 rw=read 00:19:29.945 time_based=1 00:19:29.945 runtime=10 00:19:29.945 ioengine=libaio 00:19:29.945 direct=1 00:19:29.945 bs=4096 00:19:29.945 iodepth=1 00:19:29.945 norandommap=1 00:19:29.945 numjobs=1 00:19:29.945 00:19:29.945 [job0] 00:19:29.945 filename=/dev/nvme0n1 00:19:29.945 [job1] 00:19:29.945 filename=/dev/nvme0n2 00:19:29.945 [job2] 00:19:29.945 filename=/dev/nvme0n3 00:19:29.945 [job3] 00:19:29.945 filename=/dev/nvme0n4 00:19:29.945 Could not set queue depth (nvme0n1) 00:19:29.945 Could not set queue depth (nvme0n2) 00:19:29.945 Could not set queue depth (nvme0n3) 00:19:29.945 Could not set queue depth (nvme0n4) 00:19:30.212 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:30.212 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:30.212 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:30.212 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:30.212 fio-3.35 00:19:30.212 Starting 4 threads 00:19:32.761 20:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:33.024 20:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:33.024 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=253952, buflen=4096 00:19:33.024 fio: pid=3055165, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:33.024 20:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:33.024 20:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:33.024 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=294912, buflen=4096 00:19:33.024 fio: pid=3055158, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:33.285 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=10063872, buflen=4096 00:19:33.285 fio: pid=3055132, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:33.285 20:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:33.285 20:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:33.589 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=2990080, buflen=4096 00:19:33.589 fio: pid=3055141, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:33.589 20:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:33.589 20:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:33.589 00:19:33.589 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3055132: Mon May 13 20:32:49 2024 00:19:33.589 read: IOPS=842, BW=3368KiB/s (3449kB/s)(9828KiB/2918msec) 00:19:33.589 slat (usec): min=7, max=32545, avg=42.10, stdev=670.02 00:19:33.589 clat (usec): min=487, max=2644, avg=1134.78, stdev=78.21 00:19:33.589 lat (usec): min=514, max=33737, avg=1176.88, stdev=675.88 00:19:33.589 clat percentiles (usec): 00:19:33.589 | 1.00th=[ 889], 5.00th=[ 1004], 10.00th=[ 1057], 20.00th=[ 1090], 00:19:33.589 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:19:33.589 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1205], 95.00th=[ 1237], 00:19:33.589 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1369], 99.95th=[ 1369], 00:19:33.589 | 99.99th=[ 2638] 00:19:33.589 bw ( KiB/s): min= 3392, max= 3416, per=79.82%, avg=3403.20, stdev= 9.12, samples=5 00:19:33.589 iops : min= 848, max= 854, avg=850.80, stdev= 2.28, samples=5 00:19:33.589 lat (usec) : 500=0.04%, 1000=4.60% 00:19:33.589 lat (msec) : 2=95.28%, 4=0.04% 00:19:33.589 cpu : usr=1.92%, sys=2.85%, ctx=2462, majf=0, minf=1 00:19:33.589 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.589 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.589 issued rwts: total=2458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.589 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:33.589 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3055141: Mon May 13 20:32:49 2024 00:19:33.589 read: IOPS=234, BW=937KiB/s (960kB/s)(2920KiB/3116msec) 00:19:33.589 slat (usec): min=7, max=7790, avg=36.02, stdev=287.35 00:19:33.589 clat (usec): min=731, max=42062, avg=4196.86, stdev=10775.62 00:19:33.589 lat (usec): min=761, max=48958, avg=4232.90, stdev=10817.29 00:19:33.589 clat percentiles (usec): 00:19:33.589 | 1.00th=[ 865], 5.00th=[ 955], 10.00th=[ 1004], 20.00th=[ 1057], 00:19:33.589 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:19:33.589 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[41157], 00:19:33.589 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:33.589 | 99.99th=[42206] 00:19:33.589 bw ( KiB/s): min= 95, max= 3008, per=22.73%, avg=969.17, stdev=1362.92, samples=6 00:19:33.589 iops : min= 23, max= 752, avg=242.17, stdev=340.83, samples=6 00:19:33.589 lat (usec) : 750=0.14%, 1000=9.30% 00:19:33.589 lat (msec) : 2=82.76%, 50=7.66% 00:19:33.589 cpu : usr=0.39%, sys=0.58%, ctx=734, majf=0, minf=1 00:19:33.589 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.589 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.589 issued rwts: total=731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.589 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:33.589 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3055158: Mon May 13 20:32:49 2024 00:19:33.589 read: IOPS=26, BW=104KiB/s 
(106kB/s)(288KiB/2779msec) 00:19:33.589 slat (usec): min=8, max=13533, avg=210.35, stdev=1580.98 00:19:33.589 clat (usec): min=647, max=42523, avg=38239.53, stdev=11297.30 00:19:33.589 lat (usec): min=681, max=56057, avg=38452.46, stdev=11480.14 00:19:33.589 clat percentiles (usec): 00:19:33.589 | 1.00th=[ 652], 5.00th=[ 1139], 10.00th=[41157], 20.00th=[41157], 00:19:33.589 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:19:33.589 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:33.589 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:33.589 | 99.99th=[42730] 00:19:33.589 bw ( KiB/s): min= 96, max= 104, per=2.28%, avg=97.60, stdev= 3.58, samples=5 00:19:33.589 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:19:33.589 lat (usec) : 750=1.37% 00:19:33.589 lat (msec) : 2=6.85%, 50=90.41% 00:19:33.589 cpu : usr=0.00%, sys=0.14%, ctx=74, majf=0, minf=1 00:19:33.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.590 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.590 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:33.590 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3055165: Mon May 13 20:32:49 2024 00:19:33.590 read: IOPS=24, BW=95.2KiB/s (97.4kB/s)(248KiB/2606msec) 00:19:33.590 slat (nsec): min=24237, max=39238, avg=24896.44, stdev=1851.43 00:19:33.590 clat (usec): min=1238, max=43031, avg=41486.10, stdev=5209.03 00:19:33.590 lat (usec): min=1277, max=43055, avg=41511.00, stdev=5207.18 00:19:33.590 clat percentiles (usec): 00:19:33.590 | 1.00th=[ 1237], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:19:33.590 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:33.590 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:19:33.590 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:33.590 | 99.99th=[43254] 00:19:33.590 bw ( KiB/s): min= 96, max= 96, per=2.25%, avg=96.00, stdev= 0.00, samples=5 00:19:33.590 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:19:33.590 lat (msec) : 2=1.59%, 50=96.83% 00:19:33.590 cpu : usr=0.08%, sys=0.00%, ctx=63, majf=0, minf=2 00:19:33.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.590 complete : 0=1.6%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.590 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:33.590 00:19:33.590 Run status group 0 (all jobs): 00:19:33.590 READ: bw=4263KiB/s (4365kB/s), 95.2KiB/s-3368KiB/s (97.4kB/s-3449kB/s), io=13.0MiB (13.6MB), run=2606-3116msec 00:19:33.590 00:19:33.590 Disk stats (read/write): 00:19:33.590 nvme0n1: ios=2391/0, merge=0/0, ticks=2515/0, in_queue=2515, util=93.52% 00:19:33.590 nvme0n2: ios=729/0, merge=0/0, ticks=3005/0, in_queue=3005, util=95.45% 00:19:33.590 nvme0n3: ios=63/0, merge=0/0, ticks=2583/0, in_queue=2583, util=96.03% 00:19:33.590 nvme0n4: ios=62/0, merge=0/0, ticks=2574/0, in_queue=2574, util=96.42% 00:19:33.590 20:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:19:33.590 20:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:33.887 20:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:33.887 20:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:33.887 20:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:33.887 20:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:34.148 20:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:34.148 20:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:34.148 20:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:34.148 20:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3054743 00:19:34.148 20:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:34.148 20:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:34.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:34.410 20:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:34.410 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:19:34.410 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:34.410 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:34.410 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:34.410 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:34.410 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:19:34.410 20:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:34.410 20:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:34.410 nvmf hotplug test: fio failed as expected 00:19:34.410 20:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:34.410 20:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:34.671 rmmod nvme_tcp 00:19:34.671 rmmod nvme_fabrics 00:19:34.671 rmmod nvme_keyring 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3051241 ']' 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3051241 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3051241 ']' 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3051241 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3051241 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3051241' 00:19:34.671 killing process with pid 3051241 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3051241 00:19:34.671 [2024-05-13 20:32:50.483880] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:34.671 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3051241 00:19:34.933 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:34.933 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:34.933 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:34.933 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:34.933 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:34.933 20:32:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.933 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.933 20:32:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.851 20:32:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:36.851 00:19:36.851 real 0m29.460s 00:19:36.851 user 2m32.112s 00:19:36.851 sys 0m9.440s 00:19:36.851 20:32:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:36.851 20:32:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.851 ************************************ 00:19:36.851 END TEST nvmf_fio_target 00:19:36.851 ************************************ 00:19:36.851 20:32:52 nvmf_tcp -- nvmf/nvmf.sh@56 -- # 
run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:36.851 20:32:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:36.851 20:32:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:36.851 20:32:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:36.851 ************************************ 00:19:36.851 START TEST nvmf_bdevio 00:19:36.851 ************************************ 00:19:36.851 20:32:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:37.114 * Looking for test storage... 00:19:37.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:37.114 20:32:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:45.259 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:45.259 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:45.259 Found net devices under 0000:31:00.0: cvl_0_0 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.259 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:45.259 Found net devices under 0000:31:00.1: cvl_0_1 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:45.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:45.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:19:45.260 00:19:45.260 --- 10.0.0.2 ping statistics --- 00:19:45.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.260 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:45.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:45.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:19:45.260 00:19:45.260 --- 10.0.0.1 ping statistics --- 00:19:45.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.260 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3060634 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3060634 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3060634 ']' 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:45.260 20:33:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:45.260 [2024-05-13 20:33:01.026218] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:19:45.260 [2024-05-13 20:33:01.026281] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.260 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.260 [2024-05-13 20:33:01.108442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:45.260 [2024-05-13 20:33:01.181651] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.260 [2024-05-13 20:33:01.181689] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.260 [2024-05-13 20:33:01.181697] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.260 [2024-05-13 20:33:01.181704] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.260 [2024-05-13 20:33:01.181709] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:45.260 [2024-05-13 20:33:01.181858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:45.260 [2024-05-13 20:33:01.181997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:45.260 [2024-05-13 20:33:01.182446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:45.260 [2024-05-13 20:33:01.182447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:46.230 [2024-05-13 20:33:01.859892] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:46.230 Malloc0 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:46.230 [2024-05-13 20:33:01.918894] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:46.230 [2024-05-13 20:33:01.919154] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.230 { 00:19:46.230 "params": { 00:19:46.230 "name": "Nvme$subsystem", 00:19:46.230 "trtype": "$TEST_TRANSPORT", 00:19:46.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.230 "adrfam": "ipv4", 00:19:46.230 "trsvcid": "$NVMF_PORT", 00:19:46.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.230 "hdgst": ${hdgst:-false}, 00:19:46.230 "ddgst": ${ddgst:-false} 00:19:46.230 }, 00:19:46.230 "method": "bdev_nvme_attach_controller" 00:19:46.230 } 00:19:46.230 EOF 00:19:46.230 )") 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:46.230 20:33:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:46.230 "params": { 00:19:46.230 "name": "Nvme1", 00:19:46.230 "trtype": "tcp", 00:19:46.230 "traddr": "10.0.0.2", 00:19:46.230 "adrfam": "ipv4", 00:19:46.230 "trsvcid": "4420", 00:19:46.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.230 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.230 "hdgst": false, 00:19:46.230 "ddgst": false 00:19:46.230 }, 00:19:46.230 "method": "bdev_nvme_attach_controller" 00:19:46.230 }' 00:19:46.230 [2024-05-13 20:33:01.970193] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:19:46.230 [2024-05-13 20:33:01.970243] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061031 ] 00:19:46.230 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.230 [2024-05-13 20:33:02.036249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:46.230 [2024-05-13 20:33:02.102932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.230 [2024-05-13 20:33:02.103056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.231 [2024-05-13 20:33:02.103059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.490 I/O targets: 00:19:46.490 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:46.490 00:19:46.490 00:19:46.490 CUnit - A unit testing framework for C - Version 2.1-3 00:19:46.490 http://cunit.sourceforge.net/ 00:19:46.490 00:19:46.490 00:19:46.490 Suite: bdevio tests on: Nvme1n1 00:19:46.490 Test: blockdev write read block ...passed 00:19:46.750 Test: blockdev write zeroes read block ...passed 00:19:46.750 Test: blockdev write zeroes read no split ...passed 00:19:46.750 Test: blockdev write zeroes read split ...passed 00:19:46.750 Test: blockdev write zeroes read split partial ...passed 00:19:46.750 Test: blockdev reset ...[2024-05-13 20:33:02.544239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:46.750 [2024-05-13 20:33:02.544300] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141d340 (9): Bad file descriptor 00:19:46.750 [2024-05-13 20:33:02.559028] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:46.750 passed 00:19:46.750 Test: blockdev write read 8 blocks ...passed 00:19:46.750 Test: blockdev write read size > 128k ...passed 00:19:46.750 Test: blockdev write read invalid size ...passed 00:19:46.750 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:46.750 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:46.750 Test: blockdev write read max offset ...passed 00:19:46.750 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:46.750 Test: blockdev writev readv 8 blocks ...passed 00:19:46.750 Test: blockdev writev readv 30 x 1block ...passed 00:19:47.010 Test: blockdev writev readv block ...passed 00:19:47.010 Test: blockdev writev readv size > 128k ...passed 00:19:47.010 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:47.010 Test: blockdev comparev and writev ...[2024-05-13 20:33:02.738363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.010 [2024-05-13 20:33:02.738388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.010 [2024-05-13 20:33:02.738398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.010 [2024-05-13 20:33:02.738404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:47.010 [2024-05-13 20:33:02.738738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.010 [2024-05-13 20:33:02.738746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:47.010 [2024-05-13 20:33:02.738755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.010 [2024-05-13 20:33:02.738761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:47.010 [2024-05-13 20:33:02.739152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.010 [2024-05-13 20:33:02.739159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:47.010 [2024-05-13 20:33:02.739168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.010 [2024-05-13 20:33:02.739173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:47.010 [2024-05-13 20:33:02.739503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.010 [2024-05-13 20:33:02.739511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:47.010 [2024-05-13 20:33:02.739520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:47.010 [2024-05-13 20:33:02.739525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:47.010 passed 00:19:47.010 Test: blockdev nvme passthru rw ...passed 00:19:47.010 Test: blockdev nvme passthru vendor specific ...[2024-05-13 20:33:02.823762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.010 [2024-05-13 20:33:02.823771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:47.010 [2024-05-13 20:33:02.824012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.010 [2024-05-13 20:33:02.824018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:47.010 [2024-05-13 20:33:02.824240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.010 [2024-05-13 20:33:02.824247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:47.010 [2024-05-13 20:33:02.824464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.010 [2024-05-13 20:33:02.824471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:47.010 passed 00:19:47.010 Test: blockdev nvme admin passthru ...passed 00:19:47.010 Test: blockdev copy ...passed 00:19:47.010 00:19:47.010 Run Summary: Type Total Ran Passed Failed Inactive 00:19:47.010 suites 1 1 n/a 0 0 00:19:47.010 tests 23 23 23 0 0 00:19:47.010 asserts 152 152 152 0 n/a 00:19:47.010 00:19:47.010 Elapsed time = 1.015 seconds 00:19:47.271 20:33:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:47.271 20:33:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.271 20:33:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:47.271 rmmod nvme_tcp 00:19:47.271 rmmod nvme_fabrics 00:19:47.271 rmmod nvme_keyring 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3060634 ']' 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3060634 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
3060634 ']' 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3060634 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3060634 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3060634' 00:19:47.271 killing process with pid 3060634 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3060634 00:19:47.271 [2024-05-13 20:33:03.134907] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:47.271 20:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3060634 00:19:47.533 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:47.533 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:47.533 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:47.533 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.533 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.533 20:33:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.533 20:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.533 20:33:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.443 20:33:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:49.443 00:19:49.443 real 0m12.582s 00:19:49.443 user 0m12.840s 00:19:49.443 sys 0m6.437s 00:19:49.443 20:33:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:49.443 20:33:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:49.443 ************************************ 00:19:49.443 END TEST nvmf_bdevio 00:19:49.443 ************************************ 00:19:49.704 20:33:05 nvmf_tcp -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:19:49.704 20:33:05 nvmf_tcp -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:49.704 20:33:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:49.704 20:33:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:49.704 20:33:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:49.704 ************************************ 00:19:49.704 START TEST nvmf_bdevio_no_huge 00:19:49.704 ************************************ 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:49.704 * Looking for test storage... 
00:19:49.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.704 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:49.705 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:49.705 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:49.705 20:33:05 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.705 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.705 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.705 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:49.705 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:49.705 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:49.705 20:33:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:57.847 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:57.847 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:57.847 Found net devices under 0000:31:00.0: cvl_0_0 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:57.847 20:33:13 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:57.847 Found net devices under 0000:31:00.1: cvl_0_1 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:57.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:19:57.847 00:19:57.847 --- 10.0.0.2 ping statistics --- 00:19:57.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.847 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:57.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:57.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:19:57.847 00:19:57.847 --- 10.0.0.1 ping statistics --- 00:19:57.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.847 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:57.847 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3066329 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3066329 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 3066329 ']' 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
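Note on the setup traced above: nvmf_tcp_init reduces to a short iproute2/iptables sequence. The first E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and reachability is checked in both directions before nvmf_tgt is launched inside the namespace. A condensed sketch of those steps (commands, interface names and addresses are the ones in this trace; not the verbatim nvmf/common.sh):

    NS=cvl_0_0_ns_spdk                                    # namespace name from the trace
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                       # target-side port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator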
00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:57.848 20:33:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.848 [2024-05-13 20:33:13.774851] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:19:57.848 [2024-05-13 20:33:13.774919] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:58.109 [2024-05-13 20:33:13.863385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:58.109 [2024-05-13 20:33:13.960386] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.109 [2024-05-13 20:33:13.960420] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.109 [2024-05-13 20:33:13.960428] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.109 [2024-05-13 20:33:13.960434] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.109 [2024-05-13 20:33:13.960440] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.109 [2024-05-13 20:33:13.960611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:58.109 [2024-05-13 20:33:13.960843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:58.109 [2024-05-13 20:33:13.960994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:58.109 [2024-05-13 20:33:13.960995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:58.678 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:58.678 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:19:58.678 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:58.678 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:58.678 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.678 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.678 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:58.678 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.678 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.678 [2024-05-13 20:33:14.608659] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.678 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.678 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:58.678 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.678 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.937 Malloc0 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.937 [2024-05-13 20:33:14.660772] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:58.937 [2024-05-13 20:33:14.661014] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:58.937 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:58.938 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:58.938 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:58.938 { 00:19:58.938 "params": { 00:19:58.938 "name": "Nvme$subsystem", 00:19:58.938 "trtype": "$TEST_TRANSPORT", 00:19:58.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.938 "adrfam": "ipv4", 00:19:58.938 "trsvcid": "$NVMF_PORT", 00:19:58.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.938 "hdgst": ${hdgst:-false}, 00:19:58.938 "ddgst": ${ddgst:-false} 00:19:58.938 }, 00:19:58.938 "method": "bdev_nvme_attach_controller" 00:19:58.938 } 00:19:58.938 EOF 00:19:58.938 )") 00:19:58.938 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:58.938 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:19:58.938 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:58.938 20:33:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:58.938 "params": { 00:19:58.938 "name": "Nvme1", 00:19:58.938 "trtype": "tcp", 00:19:58.938 "traddr": "10.0.0.2", 00:19:58.938 "adrfam": "ipv4", 00:19:58.938 "trsvcid": "4420", 00:19:58.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.938 "hdgst": false, 00:19:58.938 "ddgst": false 00:19:58.938 }, 00:19:58.938 "method": "bdev_nvme_attach_controller" 00:19:58.938 }' 00:19:58.938 [2024-05-13 20:33:14.720547] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:19:58.938 [2024-05-13 20:33:14.720618] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3066537 ] 00:19:58.938 [2024-05-13 20:33:14.788903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:59.199 [2024-05-13 20:33:14.881779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.199 [2024-05-13 20:33:14.881896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.199 [2024-05-13 20:33:14.881899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.199 I/O targets: 00:19:59.199 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:59.199 00:19:59.199 00:19:59.199 CUnit - A unit testing framework for C - Version 2.1-3 00:19:59.199 http://cunit.sourceforge.net/ 00:19:59.199 00:19:59.199 00:19:59.199 Suite: bdevio tests on: Nvme1n1 00:19:59.199 Test: blockdev write read block ...passed 00:19:59.199 Test: blockdev write zeroes read block ...passed 00:19:59.199 Test: blockdev write zeroes read no split ...passed 00:19:59.460 Test: blockdev write zeroes read split ...passed 00:19:59.460 Test: blockdev write zeroes read split partial ...passed 00:19:59.460 Test: blockdev reset ...[2024-05-13 20:33:15.242722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:59.460 [2024-05-13 20:33:15.242777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a39f0 (9): Bad file descriptor 00:19:59.460 [2024-05-13 20:33:15.311494] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:59.460 passed 00:19:59.460 Test: blockdev write read 8 blocks ...passed 00:19:59.460 Test: blockdev write read size > 128k ...passed 00:19:59.460 Test: blockdev write read invalid size ...passed 00:19:59.460 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:59.460 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:59.460 Test: blockdev write read max offset ...passed 00:19:59.719 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:59.719 Test: blockdev writev readv 8 blocks ...passed 00:19:59.719 Test: blockdev writev readv 30 x 1block ...passed 00:19:59.719 Test: blockdev writev readv block ...passed 00:19:59.719 Test: blockdev writev readv size > 128k ...passed 00:19:59.719 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:59.719 Test: blockdev comparev and writev ...[2024-05-13 20:33:15.580285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.719 [2024-05-13 20:33:15.580310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.719 [2024-05-13 20:33:15.580324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.719 [2024-05-13 20:33:15.580330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:59.719 [2024-05-13 20:33:15.580880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.719 [2024-05-13 20:33:15.580888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:59.719 [2024-05-13 20:33:15.580898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.719 [2024-05-13 20:33:15.580903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:59.719 [2024-05-13 20:33:15.581449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.719 [2024-05-13 20:33:15.581457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:59.719 [2024-05-13 20:33:15.581467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.719 [2024-05-13 20:33:15.581472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:59.719 [2024-05-13 20:33:15.581993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.719 [2024-05-13 20:33:15.582000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:59.719 [2024-05-13 20:33:15.582010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:59.719 [2024-05-13 20:33:15.582015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:59.719 passed 00:19:59.978 Test: blockdev nvme passthru rw ...passed 00:19:59.978 Test: blockdev nvme passthru vendor specific ...[2024-05-13 20:33:15.667290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:59.978 [2024-05-13 20:33:15.667301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:59.978 [2024-05-13 20:33:15.667704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:59.978 [2024-05-13 20:33:15.667711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:59.978 [2024-05-13 20:33:15.668099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:59.978 [2024-05-13 20:33:15.668106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:59.978 [2024-05-13 20:33:15.668509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:59.978 [2024-05-13 20:33:15.668516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:59.978 passed 00:19:59.978 Test: blockdev nvme admin passthru ...passed 00:19:59.978 Test: blockdev copy ...passed 00:19:59.978 00:19:59.978 Run Summary: Type Total Ran Passed Failed Inactive 00:19:59.978 suites 1 1 n/a 0 0 00:19:59.978 tests 23 23 23 0 0 00:19:59.978 asserts 152 152 152 0 n/a 00:19:59.978 00:19:59.978 Elapsed time = 1.419 seconds 00:20:00.238 20:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:00.238 20:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.238 20:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:00.238 20:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.238 20:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:00.238 20:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:00.238 20:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:00.238 20:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:00.238 20:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:00.238 20:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:00.238 20:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:00.238 20:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:00.238 rmmod nvme_tcp 00:20:00.238 rmmod nvme_fabrics 00:20:00.238 rmmod nvme_keyring 00:20:00.238 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:00.238 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:00.238 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:00.238 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3066329 ']' 00:20:00.238 20:33:16 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3066329 00:20:00.238 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 3066329 ']' 00:20:00.238 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 3066329 00:20:00.238 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:20:00.238 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:00.238 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3066329 00:20:00.238 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:20:00.238 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:20:00.238 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3066329' 00:20:00.238 killing process with pid 3066329 00:20:00.238 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 3066329 00:20:00.238 [2024-05-13 20:33:16.110392] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:00.238 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 3066329 00:20:00.497 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:00.498 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:00.498 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:00.498 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.498 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:00.498 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.498 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.498 20:33:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.036 20:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:03.036 00:20:03.036 real 0m13.037s 00:20:03.036 user 0m14.022s 00:20:03.036 sys 0m6.961s 00:20:03.036 20:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:03.036 20:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.036 ************************************ 00:20:03.036 END TEST nvmf_bdevio_no_huge 00:20:03.036 ************************************ 00:20:03.036 20:33:18 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:03.036 20:33:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:03.036 20:33:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:03.036 20:33:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:03.036 ************************************ 00:20:03.036 START TEST nvmf_tls 00:20:03.036 ************************************ 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
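The nvmf_tls suite starting here brings up the same style of TCP target that the bdevio run above used. Condensed from the rpc_cmd calls in the trace (a sketch, not the verbatim bdevio.sh/tls.sh; $rpc is shorthand for the rpc.py path shown in the log, talking to the default /var/tmp/spdk.sock socket):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bdevio case: target inside the test namespace, no hugepages, 1024 MB of regular memory
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the TLS variant below adds -k on the listener and registers the host with a PSK file:
    #   $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    #   $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk <key file>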
00:20:03.036 * Looking for test storage... 00:20:03.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:03.036 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:03.037 20:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:11.182 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.182 
20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:11.182 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:11.182 Found net devices under 0000:31:00.0: cvl_0_0 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:11.182 Found net devices under 0000:31:00.1: cvl_0_1 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.182 
20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:11.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.786 ms 00:20:11.182 00:20:11.182 --- 10.0.0.2 ping statistics --- 00:20:11.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.182 rtt min/avg/max/mdev = 0.786/0.786/0.786/0.000 ms 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:11.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:20:11.182 00:20:11.182 --- 10.0.0.1 ping statistics --- 00:20:11.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.182 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3071564 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3071564 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3071564 ']' 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.182 20:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:11.183 20:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.183 20:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:11.183 20:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.183 [2024-05-13 20:33:26.867242] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:20:11.183 [2024-05-13 20:33:26.867291] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.183 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.183 [2024-05-13 20:33:26.958372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.183 [2024-05-13 20:33:27.022826] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.183 [2024-05-13 20:33:27.022866] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:11.183 [2024-05-13 20:33:27.022873] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.183 [2024-05-13 20:33:27.022880] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.183 [2024-05-13 20:33:27.022886] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.183 [2024-05-13 20:33:27.022906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.756 20:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:11.757 20:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:11.757 20:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.757 20:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.757 20:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.018 20:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.018 20:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:12.018 20:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:12.018 true 00:20:12.018 20:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:12.018 20:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:12.279 20:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:12.279 20:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:12.279 20:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:12.588 20:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:12.588 20:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:12.588 20:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:12.588 20:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:12.588 20:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:12.855 20:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:12.855 20:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:13.117 20:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:13.117 20:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:13.117 20:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:13.117 20:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:13.117 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:13.117 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:13.117 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:13.377 20:33:29 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:13.377 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:13.638 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:13.638 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:13.638 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:13.638 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:13.638 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.hwDglSf929 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.vQhwefbcxn 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.hwDglSf929 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.vQhwefbcxn 00:20:13.900 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:14.161 20:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:14.422 20:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.hwDglSf929 00:20:14.422 20:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hwDglSf929 00:20:14.422 20:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.422 [2024-05-13 20:33:30.296875] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.422 20:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:14.682 20:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:14.944 [2024-05-13 20:33:30.633680] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:14.944 [2024-05-13 20:33:30.633728] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.944 [2024-05-13 20:33:30.633895] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.944 20:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:14.944 malloc0 00:20:14.944 20:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.206 20:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hwDglSf929 00:20:15.206 [2024-05-13 20:33:31.052727] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:15.206 20:33:31 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hwDglSf929 00:20:15.206 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.211 Initializing NVMe Controllers 00:20:25.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:25.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:25.211 Initialization complete. Launching workers. 
00:20:25.211 ======================================================== 00:20:25.211 Latency(us) 00:20:25.211 Device Information : IOPS MiB/s Average min max 00:20:25.211 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19417.80 75.85 3295.95 1136.04 4036.27 00:20:25.211 ======================================================== 00:20:25.211 Total : 19417.80 75.85 3295.95 1136.04 4036.27 00:20:25.211 00:20:25.211 20:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hwDglSf929 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hwDglSf929' 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3074302 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3074302 /var/tmp/bdevperf.sock 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3074302 ']' 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.472 [2024-05-13 20:33:41.209814] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:20:25.472 [2024-05-13 20:33:41.209887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074302 ] 00:20:25.472 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.472 [2024-05-13 20:33:41.266703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.472 [2024-05-13 20:33:41.318470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:25.472 20:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hwDglSf929 00:20:25.733 [2024-05-13 20:33:41.557916] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.733 [2024-05-13 20:33:41.557976] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:25.733 TLSTESTn1 00:20:25.733 20:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:25.994 Running I/O for 10 seconds... 00:20:35.996 00:20:35.996 Latency(us) 00:20:35.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.996 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:35.996 Verification LBA range: start 0x0 length 0x2000 00:20:35.996 TLSTESTn1 : 10.02 4769.27 18.63 0.00 0.00 26799.50 4587.52 49370.45 00:20:35.996 =================================================================================================================== 00:20:35.996 Total : 4769.27 18.63 0.00 0.00 26799.50 4587.52 49370.45 00:20:35.996 0 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3074302 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3074302 ']' 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3074302 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3074302 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3074302' 00:20:35.996 killing process with pid 3074302 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3074302 00:20:35.996 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.996 00:20:35.996 Latency(us) 00:20:35.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:35.996 =================================================================================================================== 00:20:35.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.996 [2024-05-13 20:33:51.822277] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3074302 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vQhwefbcxn 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vQhwefbcxn 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vQhwefbcxn 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vQhwefbcxn' 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3076416 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3076416 /var/tmp/bdevperf.sock 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3076416 ']' 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:35.996 20:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.257 [2024-05-13 20:33:51.956730] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:20:36.257 [2024-05-13 20:33:51.956776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076416 ] 00:20:36.257 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.257 [2024-05-13 20:33:52.002944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.257 [2024-05-13 20:33:52.054448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.257 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:36.257 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:36.257 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vQhwefbcxn 00:20:36.517 [2024-05-13 20:33:52.305816] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.517 [2024-05-13 20:33:52.305872] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:36.517 [2024-05-13 20:33:52.310241] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:36.517 [2024-05-13 20:33:52.310859] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1392700 (107): Transport endpoint is not connected 00:20:36.517 [2024-05-13 20:33:52.311853] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1392700 (9): Bad file descriptor 00:20:36.517 [2024-05-13 20:33:52.312855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:36.517 [2024-05-13 20:33:52.312865] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:36.517 [2024-05-13 20:33:52.312871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:36.517 request: 00:20:36.517 { 00:20:36.517 "name": "TLSTEST", 00:20:36.517 "trtype": "tcp", 00:20:36.517 "traddr": "10.0.0.2", 00:20:36.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.517 "adrfam": "ipv4", 00:20:36.517 "trsvcid": "4420", 00:20:36.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.517 "psk": "/tmp/tmp.vQhwefbcxn", 00:20:36.517 "method": "bdev_nvme_attach_controller", 00:20:36.517 "req_id": 1 00:20:36.517 } 00:20:36.517 Got JSON-RPC error response 00:20:36.517 response: 00:20:36.517 { 00:20:36.517 "code": -32602, 00:20:36.517 "message": "Invalid parameters" 00:20:36.517 } 00:20:36.517 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3076416 00:20:36.517 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3076416 ']' 00:20:36.517 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3076416 00:20:36.517 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:36.517 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:36.517 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3076416 00:20:36.517 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:36.517 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:36.518 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3076416' 00:20:36.518 killing process with pid 3076416 00:20:36.518 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3076416 00:20:36.518 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.518 00:20:36.518 Latency(us) 00:20:36.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.518 =================================================================================================================== 00:20:36.518 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.518 [2024-05-13 20:33:52.380241] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:36.518 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3076416 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hwDglSf929 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hwDglSf929 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hwDglSf929 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hwDglSf929' 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3076645 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3076645 /var/tmp/bdevperf.sock 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3076645 ']' 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.778 [2024-05-13 20:33:52.506878] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:20:36.778 [2024-05-13 20:33:52.506924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076645 ] 00:20:36.778 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.778 [2024-05-13 20:33:52.552661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.778 [2024-05-13 20:33:52.604796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:36.778 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.hwDglSf929 00:20:37.039 [2024-05-13 20:33:52.824261] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.039 [2024-05-13 20:33:52.824318] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:37.039 [2024-05-13 20:33:52.830494] tcp.c: 879:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:37.039 [2024-05-13 20:33:52.830514] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:37.039 [2024-05-13 20:33:52.830534] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:37.039 [2024-05-13 20:33:52.831434] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162b700 (107): Transport endpoint is not connected 00:20:37.039 [2024-05-13 20:33:52.832429] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162b700 (9): Bad file descriptor 00:20:37.039 [2024-05-13 20:33:52.833431] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.039 [2024-05-13 20:33:52.833437] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:37.039 [2024-05-13 20:33:52.833444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:37.039 request: 00:20:37.039 { 00:20:37.039 "name": "TLSTEST", 00:20:37.039 "trtype": "tcp", 00:20:37.039 "traddr": "10.0.0.2", 00:20:37.039 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:37.039 "adrfam": "ipv4", 00:20:37.039 "trsvcid": "4420", 00:20:37.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.039 "psk": "/tmp/tmp.hwDglSf929", 00:20:37.039 "method": "bdev_nvme_attach_controller", 00:20:37.039 "req_id": 1 00:20:37.039 } 00:20:37.039 Got JSON-RPC error response 00:20:37.039 response: 00:20:37.039 { 00:20:37.039 "code": -32602, 00:20:37.039 "message": "Invalid parameters" 00:20:37.039 } 00:20:37.039 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3076645 00:20:37.039 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3076645 ']' 00:20:37.039 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3076645 00:20:37.039 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:37.039 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:37.039 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3076645 00:20:37.039 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:37.039 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:37.039 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3076645' 00:20:37.039 killing process with pid 3076645 00:20:37.039 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3076645 00:20:37.039 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.039 00:20:37.039 Latency(us) 00:20:37.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.039 =================================================================================================================== 00:20:37.039 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.039 [2024-05-13 20:33:52.900716] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:37.039 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3076645 00:20:37.300 20:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:37.300 20:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hwDglSf929 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hwDglSf929 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hwDglSf929 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hwDglSf929' 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3076658 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3076658 /var/tmp/bdevperf.sock 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3076658 ']' 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.300 [2024-05-13 20:33:53.025206] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:20:37.300 [2024-05-13 20:33:53.025251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076658 ] 00:20:37.300 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.300 [2024-05-13 20:33:53.071403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.300 [2024-05-13 20:33:53.122818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:37.300 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hwDglSf929 00:20:37.562 [2024-05-13 20:33:53.362208] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.562 [2024-05-13 20:33:53.362265] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:37.562 [2024-05-13 20:33:53.371805] tcp.c: 879:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:37.562 [2024-05-13 20:33:53.371823] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:37.562 [2024-05-13 20:33:53.371843] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:37.562 [2024-05-13 20:33:53.372160] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd3700 (107): Transport endpoint is not connected 00:20:37.562 [2024-05-13 20:33:53.373156] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd3700 (9): Bad file descriptor 00:20:37.562 [2024-05-13 20:33:53.374158] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:37.562 [2024-05-13 20:33:53.374164] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:37.562 [2024-05-13 20:33:53.374170] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:37.562 request: 00:20:37.562 { 00:20:37.562 "name": "TLSTEST", 00:20:37.562 "trtype": "tcp", 00:20:37.562 "traddr": "10.0.0.2", 00:20:37.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.562 "adrfam": "ipv4", 00:20:37.562 "trsvcid": "4420", 00:20:37.562 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:37.562 "psk": "/tmp/tmp.hwDglSf929", 00:20:37.562 "method": "bdev_nvme_attach_controller", 00:20:37.562 "req_id": 1 00:20:37.562 } 00:20:37.562 Got JSON-RPC error response 00:20:37.562 response: 00:20:37.562 { 00:20:37.562 "code": -32602, 00:20:37.562 "message": "Invalid parameters" 00:20:37.562 } 00:20:37.562 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3076658 00:20:37.562 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3076658 ']' 00:20:37.562 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3076658 00:20:37.562 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:37.562 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:37.562 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3076658 00:20:37.562 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:37.562 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:37.562 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3076658' 00:20:37.562 killing process with pid 3076658 00:20:37.562 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3076658 00:20:37.562 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.562 00:20:37.562 Latency(us) 00:20:37.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.562 =================================================================================================================== 00:20:37.562 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.562 [2024-05-13 20:33:53.442539] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:37.562 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3076658 00:20:37.838 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3076724 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3076724 /var/tmp/bdevperf.sock 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3076724 ']' 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.839 [2024-05-13 20:33:53.589642] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:20:37.839 [2024-05-13 20:33:53.589699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076724 ] 00:20:37.839 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.839 [2024-05-13 20:33:53.646821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.839 [2024-05-13 20:33:53.698402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:37.839 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:38.104 [2024-05-13 20:33:53.943201] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:38.105 [2024-05-13 20:33:53.944625] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddf080 (9): Bad file descriptor 00:20:38.105 [2024-05-13 20:33:53.945625] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.105 [2024-05-13 20:33:53.945632] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:38.105 [2024-05-13 20:33:53.945639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:38.105 request: 00:20:38.105 { 00:20:38.105 "name": "TLSTEST", 00:20:38.105 "trtype": "tcp", 00:20:38.105 "traddr": "10.0.0.2", 00:20:38.105 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.105 "adrfam": "ipv4", 00:20:38.105 "trsvcid": "4420", 00:20:38.105 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.105 "method": "bdev_nvme_attach_controller", 00:20:38.105 "req_id": 1 00:20:38.105 } 00:20:38.105 Got JSON-RPC error response 00:20:38.105 response: 00:20:38.105 { 00:20:38.105 "code": -32602, 00:20:38.105 "message": "Invalid parameters" 00:20:38.105 } 00:20:38.105 20:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3076724 00:20:38.105 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3076724 ']' 00:20:38.105 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3076724 00:20:38.105 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:38.105 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:38.105 20:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3076724 00:20:38.105 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:38.105 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:38.105 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3076724' 00:20:38.105 killing process with pid 3076724 00:20:38.105 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3076724 00:20:38.105 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.105 00:20:38.105 Latency(us) 00:20:38.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.105 =================================================================================================================== 00:20:38.105 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.105 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3076724 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3071564 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3071564 ']' 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3071564 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3071564 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3071564' 00:20:38.366 killing process with pid 3071564 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3071564 
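The four NOT run_bdevperf attempts above (tls.sh lines 146, 149, 152 and 155) run the same attach with exactly one thing broken at a time, and each ends in the "Transport endpoint is not connected" / "Invalid parameters" response shown; the identity the target fails to find a key for is even printed in the errors (NVMe0R01 <hostnqn> <subnqn>). Side by side, the four variants are:

# A key file the target was never configured with:
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vQhwefbcxn
# A host NQN the target has no PSK registered for:
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hwDglSf929
# A subsystem NQN the target has no matching PSK for:
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hwDglSf929
# No PSK at all against a listener that requires TLS:
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''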
00:20:38.366 [2024-05-13 20:33:54.147574] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:38.366 [2024-05-13 20:33:54.147597] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3071564 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.BjCdW02lfE 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.BjCdW02lfE 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3077018 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3077018 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3077018 ']' 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:38.366 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.626 [2024-05-13 20:33:54.337309] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
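The key_long value above comes out of format_interchange_psk, which the trace shows delegating to format_key in nvmf/common.sh with a small embedded Python snippet. Below is a minimal stand-alone sketch of the same transformation; treating the configured key text as raw bytes and appending a little-endian CRC-32 before base64 encoding are assumptions inferred from the interchange format, not quoted from the helper itself.

format_interchange_psk() {
    # NVMeTLSkey-1:<hash id>:<base64(key bytes + CRC-32)>:  -- sketch of the
    # transformation behind the key_long value printed above.
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # configured key text used as raw bytes (assumed)
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 suffix, little-endian (assumed)
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]),
                                 base64.b64encode(key + crc).decode()), end="")
PY
}

key_long=$(format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2)
# With the assumptions above this reproduces the NVMeTLSkey-1:02:MDAx...NTU2Njc3wWXNJw==:
# string logged by the run; the file is then written and locked down as the trace shows.
key_long_path=$(mktemp)
echo -n "$key_long" > "$key_long_path"
chmod 0600 "$key_long_path"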
00:20:38.626 [2024-05-13 20:33:54.337365] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.626 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.626 [2024-05-13 20:33:54.422026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.626 [2024-05-13 20:33:54.474497] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.626 [2024-05-13 20:33:54.474533] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.626 [2024-05-13 20:33:54.474538] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.626 [2024-05-13 20:33:54.474543] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.626 [2024-05-13 20:33:54.474546] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.626 [2024-05-13 20:33:54.474561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.626 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:38.626 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:38.626 20:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:38.626 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.626 20:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.887 20:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.887 20:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.BjCdW02lfE 00:20:38.887 20:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BjCdW02lfE 00:20:38.887 20:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:38.887 [2024-05-13 20:33:54.727005] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.887 20:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:39.149 20:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:39.149 [2024-05-13 20:33:55.007678] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:39.149 [2024-05-13 20:33:55.007714] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.149 [2024-05-13 20:33:55.007893] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.149 20:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:39.411 malloc0 00:20:39.411 20:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:20:39.411 20:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BjCdW02lfE 00:20:39.671 [2024-05-13 20:33:55.422571] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BjCdW02lfE 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BjCdW02lfE' 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3077194 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3077194 /var/tmp/bdevperf.sock 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3077194 ']' 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:39.672 20:33:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.672 [2024-05-13 20:33:55.480044] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
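The target side that the long key is exercised against is configured entirely through rpc.py, as traced in tls.sh lines 51 through 58 above (the nvmf_tgt itself was started inside the cvl_0_0_ns_spdk network namespace earlier in the trace). Collected in one place, and with the generated key path from this run, the sequence is:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k            # -k: TLS-enabled listener, per this test's usage
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.BjCdW02lfE                # per-host PSK, file mode 0600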
00:20:39.672 [2024-05-13 20:33:55.480096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077194 ] 00:20:39.672 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.672 [2024-05-13 20:33:55.536343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.672 [2024-05-13 20:33:55.588419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.616 20:33:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:40.616 20:33:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:40.616 20:33:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BjCdW02lfE 00:20:40.616 [2024-05-13 20:33:56.381123] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.616 [2024-05-13 20:33:56.381186] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:40.616 TLSTESTn1 00:20:40.616 20:33:56 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:40.616 Running I/O for 10 seconds... 00:20:52.855 00:20:52.855 Latency(us) 00:20:52.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.855 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:52.855 Verification LBA range: start 0x0 length 0x2000 00:20:52.855 TLSTESTn1 : 10.02 4838.80 18.90 0.00 0.00 26410.41 5816.32 70341.97 00:20:52.855 =================================================================================================================== 00:20:52.855 Total : 4838.80 18.90 0.00 0.00 26410.41 5816.32 70341.97 00:20:52.855 0 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3077194 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3077194 ']' 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3077194 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3077194 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3077194' 00:20:52.855 killing process with pid 3077194 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3077194 00:20:52.855 Received shutdown signal, test time was about 10.000000 seconds 00:20:52.855 00:20:52.855 Latency(us) 00:20:52.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:52.855 =================================================================================================================== 00:20:52.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:52.855 [2024-05-13 20:34:06.658268] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3077194 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.BjCdW02lfE 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BjCdW02lfE 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:52.855 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BjCdW02lfE 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BjCdW02lfE 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BjCdW02lfE' 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3079390 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3079390 /var/tmp/bdevperf.sock 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3079390 ']' 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:52.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:52.856 20:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.856 [2024-05-13 20:34:06.824804] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:20:52.856 [2024-05-13 20:34:06.824858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3079390 ] 00:20:52.856 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.856 [2024-05-13 20:34:06.880628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.856 [2024-05-13 20:34:06.931148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BjCdW02lfE 00:20:52.856 [2024-05-13 20:34:07.732006] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.856 [2024-05-13 20:34:07.732050] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:52.856 [2024-05-13 20:34:07.732056] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.BjCdW02lfE 00:20:52.856 request: 00:20:52.856 { 00:20:52.856 "name": "TLSTEST", 00:20:52.856 "trtype": "tcp", 00:20:52.856 "traddr": "10.0.0.2", 00:20:52.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:52.856 "adrfam": "ipv4", 00:20:52.856 "trsvcid": "4420", 00:20:52.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.856 "psk": "/tmp/tmp.BjCdW02lfE", 00:20:52.856 "method": "bdev_nvme_attach_controller", 00:20:52.856 "req_id": 1 00:20:52.856 } 00:20:52.856 Got JSON-RPC error response 00:20:52.856 response: 00:20:52.856 { 00:20:52.856 "code": -1, 00:20:52.856 "message": "Operation not permitted" 00:20:52.856 } 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3079390 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3079390 ']' 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3079390 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3079390 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3079390' 00:20:52.856 killing process with pid 3079390 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3079390 00:20:52.856 Received shutdown signal, test time was about 10.000000 seconds 00:20:52.856 00:20:52.856 Latency(us) 00:20:52.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.856 =================================================================================================================== 00:20:52.856 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 3079390 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3077018 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3077018 ']' 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3077018 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3077018 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3077018' 00:20:52.856 killing process with pid 3077018 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3077018 00:20:52.856 [2024-05-13 20:34:07.963564] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:52.856 [2024-05-13 20:34:07.963603] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:52.856 20:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3077018 00:20:52.856 20:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:52.856 20:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:52.856 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:52.856 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.856 20:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3079732 00:20:52.856 20:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3079732 00:20:52.856 20:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:52.856 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3079732 ']' 00:20:52.856 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.856 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:52.856 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
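tls.sh lines 170 through 181 then check that a world-readable key file is rejected on both ends: with the file at 0666 the initiator's bdev_nvme_load_psk refuses it (the "Operation not permitted" response above), and the freshly started target below likewise fails nvmf_subsystem_add_host with "Could not retrieve PSK from file". A condensed sketch of that check follows; note the trace actually wraps the whole setup_nvmf_tgt helper in NOT rather than the single RPC, but the failing step is the same.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
chmod 0666 /tmp/tmp.BjCdW02lfE

# Initiator side: the attach must fail while the key file is world-readable.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BjCdW02lfE

# Target side: registering a host against the same file must fail as well.
NOT $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.BjCdW02lfE

# Back to the expected 0600 mode before the key is used again (tls.sh line 181).
chmod 0600 /tmp/tmp.BjCdW02lfE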
00:20:52.856 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:52.856 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.856 [2024-05-13 20:34:08.137158] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:20:52.856 [2024-05-13 20:34:08.137209] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.856 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.856 [2024-05-13 20:34:08.224540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.856 [2024-05-13 20:34:08.277782] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.856 [2024-05-13 20:34:08.277815] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.856 [2024-05-13 20:34:08.277820] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.856 [2024-05-13 20:34:08.277825] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.856 [2024-05-13 20:34:08.277829] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.856 [2024-05-13 20:34:08.277845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.BjCdW02lfE 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.BjCdW02lfE 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.BjCdW02lfE 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BjCdW02lfE 00:20:53.118 20:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:53.379 [2024-05-13 20:34:09.083854] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.379 20:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:53.379 20:34:09 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:53.639 [2024-05-13 20:34:09.408630] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:53.639 [2024-05-13 20:34:09.408675] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.639 [2024-05-13 20:34:09.408848] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.639 20:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:53.901 malloc0 00:20:53.901 20:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:53.901 20:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BjCdW02lfE 00:20:54.162 [2024-05-13 20:34:09.919952] tcp.c:3567:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:54.162 [2024-05-13 20:34:09.919975] tcp.c:3653:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:54.162 [2024-05-13 20:34:09.919993] subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:54.162 request: 00:20:54.162 { 00:20:54.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.162 "host": "nqn.2016-06.io.spdk:host1", 00:20:54.162 "psk": "/tmp/tmp.BjCdW02lfE", 00:20:54.162 "method": "nvmf_subsystem_add_host", 00:20:54.162 "req_id": 1 00:20:54.162 } 00:20:54.162 Got JSON-RPC error response 00:20:54.162 response: 00:20:54.162 { 00:20:54.162 "code": -32603, 00:20:54.162 "message": "Internal error" 00:20:54.162 } 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3079732 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3079732 ']' 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3079732 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3079732 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3079732' 00:20:54.162 killing process with pid 3079732 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3079732 00:20:54.162 [2024-05-13 20:34:09.986620] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:54.162 20:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3079732 00:20:54.424 20:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.BjCdW02lfE 00:20:54.424 20:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:54.424 20:34:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:54.424 20:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:54.424 20:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.424 20:34:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:54.424 20:34:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3080105 00:20:54.424 20:34:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3080105 00:20:54.424 20:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3080105 ']' 00:20:54.424 20:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.424 20:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:54.424 20:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.424 20:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:54.424 20:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.424 [2024-05-13 20:34:10.147394] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:20:54.424 [2024-05-13 20:34:10.147444] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.424 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.424 [2024-05-13 20:34:10.233134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.424 [2024-05-13 20:34:10.285445] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.424 [2024-05-13 20:34:10.285477] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.424 [2024-05-13 20:34:10.285483] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.424 [2024-05-13 20:34:10.285488] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.424 [2024-05-13 20:34:10.285492] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
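The JSON-RPC failure just above is the intended negative case for target/tls.sh step 177: the PSK file does not yet have the restrictive permissions the target requires, so loading it fails ("Incorrect permissions for PSK file") and nvmf_subsystem_add_host returns -32603. Step 181 then tightens the permissions before a fresh target is started and the full setup is repeated. A minimal sketch of that recovery, with paths abbreviated and the target restart/teardown plumbing omitted:

  chmod 0600 /tmp/tmp.BjCdW02lfE        # restrict the PSK file; the target rejects keys it considers too permissive
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BjCdW02lfE   # now succeeds, with only a PSK-path deprecation warning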
00:20:54.424 [2024-05-13 20:34:10.285506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.368 20:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:55.368 20:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:55.368 20:34:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:55.368 20:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.368 20:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.368 20:34:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.368 20:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.BjCdW02lfE 00:20:55.368 20:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BjCdW02lfE 00:20:55.368 20:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:55.368 [2024-05-13 20:34:11.123407] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.368 20:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:55.368 20:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:55.629 [2024-05-13 20:34:11.424123] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:55.629 [2024-05-13 20:34:11.424162] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:55.629 [2024-05-13 20:34:11.424354] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.629 20:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:55.890 malloc0 00:20:55.890 20:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:55.890 20:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BjCdW02lfE 00:20:56.189 [2024-05-13 20:34:11.878931] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:56.189 20:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:56.189 20:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3080468 00:20:56.189 20:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:56.189 20:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3080468 /var/tmp/bdevperf.sock 00:20:56.189 20:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3080468 ']' 00:20:56.189 20:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:20:56.189 20:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:56.189 20:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.189 20:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:56.189 20:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.189 [2024-05-13 20:34:11.922492] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:20:56.189 [2024-05-13 20:34:11.922542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080468 ] 00:20:56.189 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.189 [2024-05-13 20:34:11.979253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.189 [2024-05-13 20:34:12.030634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.190 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:56.190 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:56.190 20:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BjCdW02lfE 00:20:56.478 [2024-05-13 20:34:12.237914] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:56.478 [2024-05-13 20:34:12.237978] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:56.478 TLSTESTn1 00:20:56.478 20:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:56.740 20:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:56.740 "subsystems": [ 00:20:56.740 { 00:20:56.740 "subsystem": "keyring", 00:20:56.740 "config": [] 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "subsystem": "iobuf", 00:20:56.740 "config": [ 00:20:56.740 { 00:20:56.740 "method": "iobuf_set_options", 00:20:56.740 "params": { 00:20:56.740 "small_pool_count": 8192, 00:20:56.740 "large_pool_count": 1024, 00:20:56.740 "small_bufsize": 8192, 00:20:56.740 "large_bufsize": 135168 00:20:56.740 } 00:20:56.740 } 00:20:56.740 ] 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "subsystem": "sock", 00:20:56.740 "config": [ 00:20:56.740 { 00:20:56.740 "method": "sock_impl_set_options", 00:20:56.740 "params": { 00:20:56.740 "impl_name": "posix", 00:20:56.740 "recv_buf_size": 2097152, 00:20:56.740 "send_buf_size": 2097152, 00:20:56.740 "enable_recv_pipe": true, 00:20:56.740 "enable_quickack": false, 00:20:56.740 "enable_placement_id": 0, 00:20:56.740 "enable_zerocopy_send_server": true, 00:20:56.740 "enable_zerocopy_send_client": false, 00:20:56.740 "zerocopy_threshold": 0, 00:20:56.740 "tls_version": 0, 00:20:56.740 "enable_ktls": false 00:20:56.740 } 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "method": "sock_impl_set_options", 00:20:56.740 "params": { 00:20:56.740 
"impl_name": "ssl", 00:20:56.740 "recv_buf_size": 4096, 00:20:56.740 "send_buf_size": 4096, 00:20:56.740 "enable_recv_pipe": true, 00:20:56.740 "enable_quickack": false, 00:20:56.740 "enable_placement_id": 0, 00:20:56.740 "enable_zerocopy_send_server": true, 00:20:56.740 "enable_zerocopy_send_client": false, 00:20:56.740 "zerocopy_threshold": 0, 00:20:56.740 "tls_version": 0, 00:20:56.740 "enable_ktls": false 00:20:56.740 } 00:20:56.740 } 00:20:56.740 ] 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "subsystem": "vmd", 00:20:56.740 "config": [] 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "subsystem": "accel", 00:20:56.740 "config": [ 00:20:56.740 { 00:20:56.740 "method": "accel_set_options", 00:20:56.740 "params": { 00:20:56.740 "small_cache_size": 128, 00:20:56.740 "large_cache_size": 16, 00:20:56.740 "task_count": 2048, 00:20:56.740 "sequence_count": 2048, 00:20:56.740 "buf_count": 2048 00:20:56.740 } 00:20:56.740 } 00:20:56.740 ] 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "subsystem": "bdev", 00:20:56.740 "config": [ 00:20:56.740 { 00:20:56.740 "method": "bdev_set_options", 00:20:56.740 "params": { 00:20:56.740 "bdev_io_pool_size": 65535, 00:20:56.740 "bdev_io_cache_size": 256, 00:20:56.740 "bdev_auto_examine": true, 00:20:56.740 "iobuf_small_cache_size": 128, 00:20:56.740 "iobuf_large_cache_size": 16 00:20:56.740 } 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "method": "bdev_raid_set_options", 00:20:56.740 "params": { 00:20:56.740 "process_window_size_kb": 1024 00:20:56.740 } 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "method": "bdev_iscsi_set_options", 00:20:56.740 "params": { 00:20:56.740 "timeout_sec": 30 00:20:56.740 } 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "method": "bdev_nvme_set_options", 00:20:56.740 "params": { 00:20:56.740 "action_on_timeout": "none", 00:20:56.740 "timeout_us": 0, 00:20:56.740 "timeout_admin_us": 0, 00:20:56.740 "keep_alive_timeout_ms": 10000, 00:20:56.740 "arbitration_burst": 0, 00:20:56.740 "low_priority_weight": 0, 00:20:56.740 "medium_priority_weight": 0, 00:20:56.740 "high_priority_weight": 0, 00:20:56.740 "nvme_adminq_poll_period_us": 10000, 00:20:56.740 "nvme_ioq_poll_period_us": 0, 00:20:56.740 "io_queue_requests": 0, 00:20:56.740 "delay_cmd_submit": true, 00:20:56.740 "transport_retry_count": 4, 00:20:56.740 "bdev_retry_count": 3, 00:20:56.740 "transport_ack_timeout": 0, 00:20:56.740 "ctrlr_loss_timeout_sec": 0, 00:20:56.740 "reconnect_delay_sec": 0, 00:20:56.740 "fast_io_fail_timeout_sec": 0, 00:20:56.740 "disable_auto_failback": false, 00:20:56.740 "generate_uuids": false, 00:20:56.740 "transport_tos": 0, 00:20:56.740 "nvme_error_stat": false, 00:20:56.740 "rdma_srq_size": 0, 00:20:56.740 "io_path_stat": false, 00:20:56.740 "allow_accel_sequence": false, 00:20:56.740 "rdma_max_cq_size": 0, 00:20:56.740 "rdma_cm_event_timeout_ms": 0, 00:20:56.740 "dhchap_digests": [ 00:20:56.740 "sha256", 00:20:56.740 "sha384", 00:20:56.740 "sha512" 00:20:56.740 ], 00:20:56.740 "dhchap_dhgroups": [ 00:20:56.740 "null", 00:20:56.740 "ffdhe2048", 00:20:56.740 "ffdhe3072", 00:20:56.740 "ffdhe4096", 00:20:56.740 "ffdhe6144", 00:20:56.740 "ffdhe8192" 00:20:56.740 ] 00:20:56.740 } 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "method": "bdev_nvme_set_hotplug", 00:20:56.740 "params": { 00:20:56.740 "period_us": 100000, 00:20:56.740 "enable": false 00:20:56.740 } 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "method": "bdev_malloc_create", 00:20:56.740 "params": { 00:20:56.740 "name": "malloc0", 00:20:56.740 "num_blocks": 8192, 00:20:56.740 "block_size": 4096, 00:20:56.740 
"physical_block_size": 4096, 00:20:56.740 "uuid": "cc779c50-6ea9-4b72-9f1a-4e1401bf3992", 00:20:56.740 "optimal_io_boundary": 0 00:20:56.740 } 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "method": "bdev_wait_for_examine" 00:20:56.740 } 00:20:56.740 ] 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "subsystem": "nbd", 00:20:56.740 "config": [] 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "subsystem": "scheduler", 00:20:56.740 "config": [ 00:20:56.740 { 00:20:56.740 "method": "framework_set_scheduler", 00:20:56.740 "params": { 00:20:56.740 "name": "static" 00:20:56.740 } 00:20:56.740 } 00:20:56.740 ] 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "subsystem": "nvmf", 00:20:56.740 "config": [ 00:20:56.740 { 00:20:56.740 "method": "nvmf_set_config", 00:20:56.740 "params": { 00:20:56.740 "discovery_filter": "match_any", 00:20:56.740 "admin_cmd_passthru": { 00:20:56.740 "identify_ctrlr": false 00:20:56.740 } 00:20:56.740 } 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "method": "nvmf_set_max_subsystems", 00:20:56.740 "params": { 00:20:56.740 "max_subsystems": 1024 00:20:56.740 } 00:20:56.740 }, 00:20:56.740 { 00:20:56.740 "method": "nvmf_set_crdt", 00:20:56.740 "params": { 00:20:56.740 "crdt1": 0, 00:20:56.740 "crdt2": 0, 00:20:56.740 "crdt3": 0 00:20:56.741 } 00:20:56.741 }, 00:20:56.741 { 00:20:56.741 "method": "nvmf_create_transport", 00:20:56.741 "params": { 00:20:56.741 "trtype": "TCP", 00:20:56.741 "max_queue_depth": 128, 00:20:56.741 "max_io_qpairs_per_ctrlr": 127, 00:20:56.741 "in_capsule_data_size": 4096, 00:20:56.741 "max_io_size": 131072, 00:20:56.741 "io_unit_size": 131072, 00:20:56.741 "max_aq_depth": 128, 00:20:56.741 "num_shared_buffers": 511, 00:20:56.741 "buf_cache_size": 4294967295, 00:20:56.741 "dif_insert_or_strip": false, 00:20:56.741 "zcopy": false, 00:20:56.741 "c2h_success": false, 00:20:56.741 "sock_priority": 0, 00:20:56.741 "abort_timeout_sec": 1, 00:20:56.741 "ack_timeout": 0, 00:20:56.741 "data_wr_pool_size": 0 00:20:56.741 } 00:20:56.741 }, 00:20:56.741 { 00:20:56.741 "method": "nvmf_create_subsystem", 00:20:56.741 "params": { 00:20:56.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.741 "allow_any_host": false, 00:20:56.741 "serial_number": "SPDK00000000000001", 00:20:56.741 "model_number": "SPDK bdev Controller", 00:20:56.741 "max_namespaces": 10, 00:20:56.741 "min_cntlid": 1, 00:20:56.741 "max_cntlid": 65519, 00:20:56.741 "ana_reporting": false 00:20:56.741 } 00:20:56.741 }, 00:20:56.741 { 00:20:56.741 "method": "nvmf_subsystem_add_host", 00:20:56.741 "params": { 00:20:56.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.741 "host": "nqn.2016-06.io.spdk:host1", 00:20:56.741 "psk": "/tmp/tmp.BjCdW02lfE" 00:20:56.741 } 00:20:56.741 }, 00:20:56.741 { 00:20:56.741 "method": "nvmf_subsystem_add_ns", 00:20:56.741 "params": { 00:20:56.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.741 "namespace": { 00:20:56.741 "nsid": 1, 00:20:56.741 "bdev_name": "malloc0", 00:20:56.741 "nguid": "CC779C506EA94B729F1A4E1401BF3992", 00:20:56.741 "uuid": "cc779c50-6ea9-4b72-9f1a-4e1401bf3992", 00:20:56.741 "no_auto_visible": false 00:20:56.741 } 00:20:56.741 } 00:20:56.741 }, 00:20:56.741 { 00:20:56.741 "method": "nvmf_subsystem_add_listener", 00:20:56.741 "params": { 00:20:56.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.741 "listen_address": { 00:20:56.741 "trtype": "TCP", 00:20:56.741 "adrfam": "IPv4", 00:20:56.741 "traddr": "10.0.0.2", 00:20:56.741 "trsvcid": "4420" 00:20:56.741 }, 00:20:56.741 "secure_channel": true 00:20:56.741 } 00:20:56.741 } 00:20:56.741 ] 00:20:56.741 } 
00:20:56.741 ] 00:20:56.741 }' 00:20:56.741 20:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:57.003 20:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:57.003 "subsystems": [ 00:20:57.003 { 00:20:57.003 "subsystem": "keyring", 00:20:57.003 "config": [] 00:20:57.003 }, 00:20:57.003 { 00:20:57.003 "subsystem": "iobuf", 00:20:57.003 "config": [ 00:20:57.003 { 00:20:57.003 "method": "iobuf_set_options", 00:20:57.003 "params": { 00:20:57.003 "small_pool_count": 8192, 00:20:57.003 "large_pool_count": 1024, 00:20:57.003 "small_bufsize": 8192, 00:20:57.003 "large_bufsize": 135168 00:20:57.003 } 00:20:57.003 } 00:20:57.003 ] 00:20:57.003 }, 00:20:57.003 { 00:20:57.003 "subsystem": "sock", 00:20:57.003 "config": [ 00:20:57.003 { 00:20:57.003 "method": "sock_impl_set_options", 00:20:57.003 "params": { 00:20:57.003 "impl_name": "posix", 00:20:57.003 "recv_buf_size": 2097152, 00:20:57.003 "send_buf_size": 2097152, 00:20:57.003 "enable_recv_pipe": true, 00:20:57.003 "enable_quickack": false, 00:20:57.003 "enable_placement_id": 0, 00:20:57.003 "enable_zerocopy_send_server": true, 00:20:57.003 "enable_zerocopy_send_client": false, 00:20:57.003 "zerocopy_threshold": 0, 00:20:57.003 "tls_version": 0, 00:20:57.003 "enable_ktls": false 00:20:57.003 } 00:20:57.003 }, 00:20:57.003 { 00:20:57.003 "method": "sock_impl_set_options", 00:20:57.003 "params": { 00:20:57.003 "impl_name": "ssl", 00:20:57.003 "recv_buf_size": 4096, 00:20:57.003 "send_buf_size": 4096, 00:20:57.003 "enable_recv_pipe": true, 00:20:57.003 "enable_quickack": false, 00:20:57.003 "enable_placement_id": 0, 00:20:57.003 "enable_zerocopy_send_server": true, 00:20:57.003 "enable_zerocopy_send_client": false, 00:20:57.003 "zerocopy_threshold": 0, 00:20:57.003 "tls_version": 0, 00:20:57.003 "enable_ktls": false 00:20:57.003 } 00:20:57.003 } 00:20:57.003 ] 00:20:57.003 }, 00:20:57.003 { 00:20:57.003 "subsystem": "vmd", 00:20:57.003 "config": [] 00:20:57.003 }, 00:20:57.003 { 00:20:57.003 "subsystem": "accel", 00:20:57.003 "config": [ 00:20:57.003 { 00:20:57.003 "method": "accel_set_options", 00:20:57.003 "params": { 00:20:57.003 "small_cache_size": 128, 00:20:57.003 "large_cache_size": 16, 00:20:57.003 "task_count": 2048, 00:20:57.003 "sequence_count": 2048, 00:20:57.003 "buf_count": 2048 00:20:57.003 } 00:20:57.003 } 00:20:57.003 ] 00:20:57.003 }, 00:20:57.003 { 00:20:57.003 "subsystem": "bdev", 00:20:57.003 "config": [ 00:20:57.003 { 00:20:57.003 "method": "bdev_set_options", 00:20:57.003 "params": { 00:20:57.003 "bdev_io_pool_size": 65535, 00:20:57.003 "bdev_io_cache_size": 256, 00:20:57.003 "bdev_auto_examine": true, 00:20:57.003 "iobuf_small_cache_size": 128, 00:20:57.003 "iobuf_large_cache_size": 16 00:20:57.003 } 00:20:57.003 }, 00:20:57.003 { 00:20:57.003 "method": "bdev_raid_set_options", 00:20:57.003 "params": { 00:20:57.003 "process_window_size_kb": 1024 00:20:57.003 } 00:20:57.003 }, 00:20:57.003 { 00:20:57.003 "method": "bdev_iscsi_set_options", 00:20:57.003 "params": { 00:20:57.003 "timeout_sec": 30 00:20:57.003 } 00:20:57.003 }, 00:20:57.003 { 00:20:57.003 "method": "bdev_nvme_set_options", 00:20:57.003 "params": { 00:20:57.003 "action_on_timeout": "none", 00:20:57.003 "timeout_us": 0, 00:20:57.003 "timeout_admin_us": 0, 00:20:57.003 "keep_alive_timeout_ms": 10000, 00:20:57.003 "arbitration_burst": 0, 00:20:57.003 "low_priority_weight": 0, 00:20:57.003 "medium_priority_weight": 0, 00:20:57.003 
"high_priority_weight": 0, 00:20:57.003 "nvme_adminq_poll_period_us": 10000, 00:20:57.003 "nvme_ioq_poll_period_us": 0, 00:20:57.003 "io_queue_requests": 512, 00:20:57.003 "delay_cmd_submit": true, 00:20:57.003 "transport_retry_count": 4, 00:20:57.003 "bdev_retry_count": 3, 00:20:57.003 "transport_ack_timeout": 0, 00:20:57.003 "ctrlr_loss_timeout_sec": 0, 00:20:57.003 "reconnect_delay_sec": 0, 00:20:57.003 "fast_io_fail_timeout_sec": 0, 00:20:57.003 "disable_auto_failback": false, 00:20:57.003 "generate_uuids": false, 00:20:57.003 "transport_tos": 0, 00:20:57.003 "nvme_error_stat": false, 00:20:57.003 "rdma_srq_size": 0, 00:20:57.003 "io_path_stat": false, 00:20:57.003 "allow_accel_sequence": false, 00:20:57.003 "rdma_max_cq_size": 0, 00:20:57.003 "rdma_cm_event_timeout_ms": 0, 00:20:57.003 "dhchap_digests": [ 00:20:57.003 "sha256", 00:20:57.003 "sha384", 00:20:57.003 "sha512" 00:20:57.003 ], 00:20:57.003 "dhchap_dhgroups": [ 00:20:57.003 "null", 00:20:57.003 "ffdhe2048", 00:20:57.003 "ffdhe3072", 00:20:57.003 "ffdhe4096", 00:20:57.003 "ffdhe6144", 00:20:57.003 "ffdhe8192" 00:20:57.003 ] 00:20:57.003 } 00:20:57.003 }, 00:20:57.003 { 00:20:57.003 "method": "bdev_nvme_attach_controller", 00:20:57.003 "params": { 00:20:57.003 "name": "TLSTEST", 00:20:57.003 "trtype": "TCP", 00:20:57.003 "adrfam": "IPv4", 00:20:57.003 "traddr": "10.0.0.2", 00:20:57.003 "trsvcid": "4420", 00:20:57.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.003 "prchk_reftag": false, 00:20:57.003 "prchk_guard": false, 00:20:57.003 "ctrlr_loss_timeout_sec": 0, 00:20:57.003 "reconnect_delay_sec": 0, 00:20:57.003 "fast_io_fail_timeout_sec": 0, 00:20:57.003 "psk": "/tmp/tmp.BjCdW02lfE", 00:20:57.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.003 "hdgst": false, 00:20:57.003 "ddgst": false 00:20:57.003 } 00:20:57.003 }, 00:20:57.003 { 00:20:57.003 "method": "bdev_nvme_set_hotplug", 00:20:57.003 "params": { 00:20:57.003 "period_us": 100000, 00:20:57.003 "enable": false 00:20:57.003 } 00:20:57.003 }, 00:20:57.003 { 00:20:57.003 "method": "bdev_wait_for_examine" 00:20:57.003 } 00:20:57.003 ] 00:20:57.003 }, 00:20:57.003 { 00:20:57.003 "subsystem": "nbd", 00:20:57.003 "config": [] 00:20:57.003 } 00:20:57.003 ] 00:20:57.003 }' 00:20:57.003 20:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3080468 00:20:57.003 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3080468 ']' 00:20:57.003 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3080468 00:20:57.003 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:57.003 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:57.003 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3080468 00:20:57.003 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:57.003 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:57.003 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3080468' 00:20:57.003 killing process with pid 3080468 00:20:57.003 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3080468 00:20:57.003 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.003 00:20:57.003 Latency(us) 00:20:57.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.003 
=================================================================================================================== 00:20:57.003 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:57.003 [2024-05-13 20:34:12.849772] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:57.003 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3080468 00:20:57.265 20:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3080105 00:20:57.265 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3080105 ']' 00:20:57.265 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3080105 00:20:57.265 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:57.265 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:57.265 20:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3080105 00:20:57.265 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:57.265 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:57.265 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3080105' 00:20:57.265 killing process with pid 3080105 00:20:57.265 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3080105 00:20:57.265 [2024-05-13 20:34:13.019086] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:57.265 [2024-05-13 20:34:13.019116] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:57.265 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3080105 00:20:57.265 20:34:13 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:57.265 20:34:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:57.265 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:57.265 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.265 20:34:13 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:57.265 "subsystems": [ 00:20:57.265 { 00:20:57.265 "subsystem": "keyring", 00:20:57.265 "config": [] 00:20:57.265 }, 00:20:57.265 { 00:20:57.265 "subsystem": "iobuf", 00:20:57.265 "config": [ 00:20:57.265 { 00:20:57.265 "method": "iobuf_set_options", 00:20:57.265 "params": { 00:20:57.265 "small_pool_count": 8192, 00:20:57.265 "large_pool_count": 1024, 00:20:57.265 "small_bufsize": 8192, 00:20:57.265 "large_bufsize": 135168 00:20:57.265 } 00:20:57.265 } 00:20:57.265 ] 00:20:57.265 }, 00:20:57.265 { 00:20:57.265 "subsystem": "sock", 00:20:57.265 "config": [ 00:20:57.265 { 00:20:57.265 "method": "sock_impl_set_options", 00:20:57.265 "params": { 00:20:57.265 "impl_name": "posix", 00:20:57.265 "recv_buf_size": 2097152, 00:20:57.265 "send_buf_size": 2097152, 00:20:57.265 "enable_recv_pipe": true, 00:20:57.265 "enable_quickack": false, 00:20:57.265 "enable_placement_id": 0, 00:20:57.265 "enable_zerocopy_send_server": true, 00:20:57.265 "enable_zerocopy_send_client": false, 00:20:57.265 "zerocopy_threshold": 0, 00:20:57.265 "tls_version": 0, 00:20:57.265 "enable_ktls": false 00:20:57.265 } 
00:20:57.265 }, 00:20:57.265 { 00:20:57.265 "method": "sock_impl_set_options", 00:20:57.265 "params": { 00:20:57.265 "impl_name": "ssl", 00:20:57.265 "recv_buf_size": 4096, 00:20:57.265 "send_buf_size": 4096, 00:20:57.265 "enable_recv_pipe": true, 00:20:57.265 "enable_quickack": false, 00:20:57.265 "enable_placement_id": 0, 00:20:57.265 "enable_zerocopy_send_server": true, 00:20:57.265 "enable_zerocopy_send_client": false, 00:20:57.265 "zerocopy_threshold": 0, 00:20:57.265 "tls_version": 0, 00:20:57.265 "enable_ktls": false 00:20:57.265 } 00:20:57.265 } 00:20:57.265 ] 00:20:57.265 }, 00:20:57.265 { 00:20:57.265 "subsystem": "vmd", 00:20:57.265 "config": [] 00:20:57.265 }, 00:20:57.265 { 00:20:57.265 "subsystem": "accel", 00:20:57.265 "config": [ 00:20:57.265 { 00:20:57.265 "method": "accel_set_options", 00:20:57.265 "params": { 00:20:57.265 "small_cache_size": 128, 00:20:57.265 "large_cache_size": 16, 00:20:57.265 "task_count": 2048, 00:20:57.265 "sequence_count": 2048, 00:20:57.265 "buf_count": 2048 00:20:57.265 } 00:20:57.265 } 00:20:57.265 ] 00:20:57.265 }, 00:20:57.265 { 00:20:57.265 "subsystem": "bdev", 00:20:57.265 "config": [ 00:20:57.265 { 00:20:57.265 "method": "bdev_set_options", 00:20:57.265 "params": { 00:20:57.265 "bdev_io_pool_size": 65535, 00:20:57.265 "bdev_io_cache_size": 256, 00:20:57.266 "bdev_auto_examine": true, 00:20:57.266 "iobuf_small_cache_size": 128, 00:20:57.266 "iobuf_large_cache_size": 16 00:20:57.266 } 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "method": "bdev_raid_set_options", 00:20:57.266 "params": { 00:20:57.266 "process_window_size_kb": 1024 00:20:57.266 } 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "method": "bdev_iscsi_set_options", 00:20:57.266 "params": { 00:20:57.266 "timeout_sec": 30 00:20:57.266 } 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "method": "bdev_nvme_set_options", 00:20:57.266 "params": { 00:20:57.266 "action_on_timeout": "none", 00:20:57.266 "timeout_us": 0, 00:20:57.266 "timeout_admin_us": 0, 00:20:57.266 "keep_alive_timeout_ms": 10000, 00:20:57.266 "arbitration_burst": 0, 00:20:57.266 "low_priority_weight": 0, 00:20:57.266 "medium_priority_weight": 0, 00:20:57.266 "high_priority_weight": 0, 00:20:57.266 "nvme_adminq_poll_period_us": 10000, 00:20:57.266 "nvme_ioq_poll_period_us": 0, 00:20:57.266 "io_queue_requests": 0, 00:20:57.266 "delay_cmd_submit": true, 00:20:57.266 "transport_retry_count": 4, 00:20:57.266 "bdev_retry_count": 3, 00:20:57.266 "transport_ack_timeout": 0, 00:20:57.266 "ctrlr_loss_timeout_sec": 0, 00:20:57.266 "reconnect_delay_sec": 0, 00:20:57.266 "fast_io_fail_timeout_sec": 0, 00:20:57.266 "disable_auto_failback": false, 00:20:57.266 "generate_uuids": false, 00:20:57.266 "transport_tos": 0, 00:20:57.266 "nvme_error_stat": false, 00:20:57.266 "rdma_srq_size": 0, 00:20:57.266 "io_path_stat": false, 00:20:57.266 "allow_accel_sequence": false, 00:20:57.266 "rdma_max_cq_size": 0, 00:20:57.266 "rdma_cm_event_timeout_ms": 0, 00:20:57.266 "dhchap_digests": [ 00:20:57.266 "sha256", 00:20:57.266 "sha384", 00:20:57.266 "sha512" 00:20:57.266 ], 00:20:57.266 "dhchap_dhgroups": [ 00:20:57.266 "null", 00:20:57.266 "ffdhe2048", 00:20:57.266 "ffdhe3072", 00:20:57.266 "ffdhe4096", 00:20:57.266 "ffdhe6144", 00:20:57.266 "ffdhe8192" 00:20:57.266 ] 00:20:57.266 } 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "method": "bdev_nvme_set_hotplug", 00:20:57.266 "params": { 00:20:57.266 "period_us": 100000, 00:20:57.266 "enable": false 00:20:57.266 } 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "method": "bdev_malloc_create", 00:20:57.266 
"params": { 00:20:57.266 "name": "malloc0", 00:20:57.266 "num_blocks": 8192, 00:20:57.266 "block_size": 4096, 00:20:57.266 "physical_block_size": 4096, 00:20:57.266 "uuid": "cc779c50-6ea9-4b72-9f1a-4e1401bf3992", 00:20:57.266 "optimal_io_boundary": 0 00:20:57.266 } 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "method": "bdev_wait_for_examine" 00:20:57.266 } 00:20:57.266 ] 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "subsystem": "nbd", 00:20:57.266 "config": [] 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "subsystem": "scheduler", 00:20:57.266 "config": [ 00:20:57.266 { 00:20:57.266 "method": "framework_set_scheduler", 00:20:57.266 "params": { 00:20:57.266 "name": "static" 00:20:57.266 } 00:20:57.266 } 00:20:57.266 ] 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "subsystem": "nvmf", 00:20:57.266 "config": [ 00:20:57.266 { 00:20:57.266 "method": "nvmf_set_config", 00:20:57.266 "params": { 00:20:57.266 "discovery_filter": "match_any", 00:20:57.266 "admin_cmd_passthru": { 00:20:57.266 "identify_ctrlr": false 00:20:57.266 } 00:20:57.266 } 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "method": "nvmf_set_max_subsystems", 00:20:57.266 "params": { 00:20:57.266 "max_subsystems": 1024 00:20:57.266 } 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "method": "nvmf_set_crdt", 00:20:57.266 "params": { 00:20:57.266 "crdt1": 0, 00:20:57.266 "crdt2": 0, 00:20:57.266 "crdt3": 0 00:20:57.266 } 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "method": "nvmf_create_transport", 00:20:57.266 "params": { 00:20:57.266 "trtype": "TCP", 00:20:57.266 "max_queue_depth": 128, 00:20:57.266 "max_io_qpairs_per_ctrlr": 127, 00:20:57.266 "in_capsule_data_size": 4096, 00:20:57.266 "max_io_size": 131072, 00:20:57.266 "io_unit_size": 131072, 00:20:57.266 "max_aq_depth": 128, 00:20:57.266 "num_shared_buffers": 511, 00:20:57.266 "buf_cache_size": 4294967295, 00:20:57.266 "dif_insert_or_strip": false, 00:20:57.266 "zcopy": false, 00:20:57.266 "c2h_success": false, 00:20:57.266 "sock_priority": 0, 00:20:57.266 "abort_timeout_sec": 1, 00:20:57.266 "ack_timeout": 0, 00:20:57.266 "data_wr_pool_size": 0 00:20:57.266 } 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "method": "nvmf_create_subsystem", 00:20:57.266 "params": { 00:20:57.266 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.266 "allow_any_host": false, 00:20:57.266 "serial_number": "SPDK00000000000001", 00:20:57.266 "model_number": "SPDK bdev Controller", 00:20:57.266 "max_namespaces": 10, 00:20:57.266 "min_cntlid": 1, 00:20:57.266 "max_cntlid": 65519, 00:20:57.266 "ana_reporting": false 00:20:57.266 } 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "method": "nvmf_subsystem_add_host", 00:20:57.266 "params": { 00:20:57.266 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.266 "host": "nqn.2016-06.io.spdk:host1", 00:20:57.266 "psk": "/tmp/tmp.BjCdW02lfE" 00:20:57.266 } 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "method": "nvmf_subsystem_add_ns", 00:20:57.266 "params": { 00:20:57.266 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.266 "namespace": { 00:20:57.266 "nsid": 1, 00:20:57.266 "bdev_name": "malloc0", 00:20:57.266 "nguid": "CC779C506EA94B729F1A4E1401BF3992", 00:20:57.266 "uuid": "cc779c50-6ea9-4b72-9f1a-4e1401bf3992", 00:20:57.266 "no_auto_visible": false 00:20:57.266 } 00:20:57.266 } 00:20:57.266 }, 00:20:57.266 { 00:20:57.266 "method": "nvmf_subsystem_add_listener", 00:20:57.266 "params": { 00:20:57.266 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.266 "listen_address": { 00:20:57.266 "trtype": "TCP", 00:20:57.266 "adrfam": "IPv4", 00:20:57.266 "traddr": "10.0.0.2", 00:20:57.266 "trsvcid": 
"4420" 00:20:57.266 }, 00:20:57.266 "secure_channel": true 00:20:57.266 } 00:20:57.266 } 00:20:57.266 ] 00:20:57.266 } 00:20:57.266 ] 00:20:57.266 }' 00:20:57.266 20:34:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3080818 00:20:57.266 20:34:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3080818 00:20:57.266 20:34:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:57.266 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3080818 ']' 00:20:57.266 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.266 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:57.266 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.266 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:57.266 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.266 [2024-05-13 20:34:13.190973] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:20:57.266 [2024-05-13 20:34:13.191028] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.528 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.528 [2024-05-13 20:34:13.276227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.528 [2024-05-13 20:34:13.329299] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.528 [2024-05-13 20:34:13.329336] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.528 [2024-05-13 20:34:13.329344] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.528 [2024-05-13 20:34:13.329349] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.528 [2024-05-13 20:34:13.329353] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:57.528 [2024-05-13 20:34:13.329399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.789 [2024-05-13 20:34:13.504684] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.789 [2024-05-13 20:34:13.520654] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:57.789 [2024-05-13 20:34:13.536689] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:57.789 [2024-05-13 20:34:13.536725] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.789 [2024-05-13 20:34:13.550637] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.050 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:58.050 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:58.050 20:34:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:58.050 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:58.050 20:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.311 20:34:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.311 20:34:14 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3080848 00:20:58.311 20:34:14 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3080848 /var/tmp/bdevperf.sock 00:20:58.311 20:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3080848 ']' 00:20:58.311 20:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.311 20:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:58.311 20:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
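Steps 196-208 above capture and replay the configuration: save_config dumps the live target and bdevperf JSON shown above, both processes are killed, and each is relaunched with its JSON fed back over an anonymous pipe (/dev/fd/62 for the target, /dev/fd/63 for bdevperf) instead of a config file on disk. Roughly, with the ip-netns wrapper and the kill/wait plumbing omitted and paths abbreviated:

  tgtconf=$(scripts/rpc.py save_config)
  bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  # relaunch the target from the captured JSON; it re-creates the TCP transport,
  # the TLS listener (secure_channel: true) and the PSK-protected host entry
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
  # relaunch bdevperf the same way; its JSON embeds the bdev_nvme_attach_controller
  # call with "psk": "/tmp/tmp.BjCdW02lfE", so the TLS connection is re-established at startup
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &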
00:20:58.311 20:34:14 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:58.311 20:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:58.311 20:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.311 20:34:14 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:58.311 "subsystems": [ 00:20:58.311 { 00:20:58.311 "subsystem": "keyring", 00:20:58.311 "config": [] 00:20:58.311 }, 00:20:58.311 { 00:20:58.311 "subsystem": "iobuf", 00:20:58.311 "config": [ 00:20:58.311 { 00:20:58.311 "method": "iobuf_set_options", 00:20:58.311 "params": { 00:20:58.311 "small_pool_count": 8192, 00:20:58.311 "large_pool_count": 1024, 00:20:58.311 "small_bufsize": 8192, 00:20:58.311 "large_bufsize": 135168 00:20:58.311 } 00:20:58.311 } 00:20:58.311 ] 00:20:58.311 }, 00:20:58.311 { 00:20:58.311 "subsystem": "sock", 00:20:58.311 "config": [ 00:20:58.311 { 00:20:58.311 "method": "sock_impl_set_options", 00:20:58.311 "params": { 00:20:58.311 "impl_name": "posix", 00:20:58.311 "recv_buf_size": 2097152, 00:20:58.311 "send_buf_size": 2097152, 00:20:58.311 "enable_recv_pipe": true, 00:20:58.311 "enable_quickack": false, 00:20:58.311 "enable_placement_id": 0, 00:20:58.311 "enable_zerocopy_send_server": true, 00:20:58.311 "enable_zerocopy_send_client": false, 00:20:58.311 "zerocopy_threshold": 0, 00:20:58.311 "tls_version": 0, 00:20:58.311 "enable_ktls": false 00:20:58.311 } 00:20:58.311 }, 00:20:58.311 { 00:20:58.311 "method": "sock_impl_set_options", 00:20:58.311 "params": { 00:20:58.311 "impl_name": "ssl", 00:20:58.311 "recv_buf_size": 4096, 00:20:58.311 "send_buf_size": 4096, 00:20:58.311 "enable_recv_pipe": true, 00:20:58.311 "enable_quickack": false, 00:20:58.311 "enable_placement_id": 0, 00:20:58.311 "enable_zerocopy_send_server": true, 00:20:58.311 "enable_zerocopy_send_client": false, 00:20:58.311 "zerocopy_threshold": 0, 00:20:58.311 "tls_version": 0, 00:20:58.311 "enable_ktls": false 00:20:58.311 } 00:20:58.311 } 00:20:58.311 ] 00:20:58.311 }, 00:20:58.311 { 00:20:58.311 "subsystem": "vmd", 00:20:58.311 "config": [] 00:20:58.311 }, 00:20:58.311 { 00:20:58.311 "subsystem": "accel", 00:20:58.311 "config": [ 00:20:58.311 { 00:20:58.311 "method": "accel_set_options", 00:20:58.311 "params": { 00:20:58.311 "small_cache_size": 128, 00:20:58.311 "large_cache_size": 16, 00:20:58.312 "task_count": 2048, 00:20:58.312 "sequence_count": 2048, 00:20:58.312 "buf_count": 2048 00:20:58.312 } 00:20:58.312 } 00:20:58.312 ] 00:20:58.312 }, 00:20:58.312 { 00:20:58.312 "subsystem": "bdev", 00:20:58.312 "config": [ 00:20:58.312 { 00:20:58.312 "method": "bdev_set_options", 00:20:58.312 "params": { 00:20:58.312 "bdev_io_pool_size": 65535, 00:20:58.312 "bdev_io_cache_size": 256, 00:20:58.312 "bdev_auto_examine": true, 00:20:58.312 "iobuf_small_cache_size": 128, 00:20:58.312 "iobuf_large_cache_size": 16 00:20:58.312 } 00:20:58.312 }, 00:20:58.312 { 00:20:58.312 "method": "bdev_raid_set_options", 00:20:58.312 "params": { 00:20:58.312 "process_window_size_kb": 1024 00:20:58.312 } 00:20:58.312 }, 00:20:58.312 { 00:20:58.312 "method": "bdev_iscsi_set_options", 00:20:58.312 "params": { 00:20:58.312 "timeout_sec": 30 00:20:58.312 } 00:20:58.312 }, 00:20:58.312 { 00:20:58.312 "method": "bdev_nvme_set_options", 00:20:58.312 "params": { 00:20:58.312 "action_on_timeout": "none", 00:20:58.312 "timeout_us": 0, 00:20:58.312 
"timeout_admin_us": 0, 00:20:58.312 "keep_alive_timeout_ms": 10000, 00:20:58.312 "arbitration_burst": 0, 00:20:58.312 "low_priority_weight": 0, 00:20:58.312 "medium_priority_weight": 0, 00:20:58.312 "high_priority_weight": 0, 00:20:58.312 "nvme_adminq_poll_period_us": 10000, 00:20:58.312 "nvme_ioq_poll_period_us": 0, 00:20:58.312 "io_queue_requests": 512, 00:20:58.312 "delay_cmd_submit": true, 00:20:58.312 "transport_retry_count": 4, 00:20:58.312 "bdev_retry_count": 3, 00:20:58.312 "transport_ack_timeout": 0, 00:20:58.312 "ctrlr_loss_timeout_sec": 0, 00:20:58.312 "reconnect_delay_sec": 0, 00:20:58.312 "fast_io_fail_timeout_sec": 0, 00:20:58.312 "disable_auto_failback": false, 00:20:58.312 "generate_uuids": false, 00:20:58.312 "transport_tos": 0, 00:20:58.312 "nvme_error_stat": false, 00:20:58.312 "rdma_srq_size": 0, 00:20:58.312 "io_path_stat": false, 00:20:58.312 "allow_accel_sequence": false, 00:20:58.312 "rdma_max_cq_size": 0, 00:20:58.312 "rdma_cm_event_timeout_ms": 0, 00:20:58.312 "dhchap_digests": [ 00:20:58.312 "sha256", 00:20:58.312 "sha384", 00:20:58.312 "sha512" 00:20:58.312 ], 00:20:58.312 "dhchap_dhgroups": [ 00:20:58.312 "null", 00:20:58.312 "ffdhe2048", 00:20:58.312 "ffdhe3072", 00:20:58.312 "ffdhe4096", 00:20:58.312 "ffdhe6144", 00:20:58.312 "ffdhe8192" 00:20:58.312 ] 00:20:58.312 } 00:20:58.312 }, 00:20:58.312 { 00:20:58.312 "method": "bdev_nvme_attach_controller", 00:20:58.312 "params": { 00:20:58.312 "name": "TLSTEST", 00:20:58.312 "trtype": "TCP", 00:20:58.312 "adrfam": "IPv4", 00:20:58.312 "traddr": "10.0.0.2", 00:20:58.312 "trsvcid": "4420", 00:20:58.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.312 "prchk_reftag": false, 00:20:58.312 "prchk_guard": false, 00:20:58.312 "ctrlr_loss_timeout_sec": 0, 00:20:58.312 "reconnect_delay_sec": 0, 00:20:58.312 "fast_io_fail_timeout_sec": 0, 00:20:58.312 "psk": "/tmp/tmp.BjCdW02lfE", 00:20:58.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.312 "hdgst": false, 00:20:58.312 "ddgst": false 00:20:58.312 } 00:20:58.312 }, 00:20:58.312 { 00:20:58.312 "method": "bdev_nvme_set_hotplug", 00:20:58.312 "params": { 00:20:58.312 "period_us": 100000, 00:20:58.312 "enable": false 00:20:58.312 } 00:20:58.312 }, 00:20:58.312 { 00:20:58.312 "method": "bdev_wait_for_examine" 00:20:58.312 } 00:20:58.312 ] 00:20:58.312 }, 00:20:58.312 { 00:20:58.312 "subsystem": "nbd", 00:20:58.312 "config": [] 00:20:58.312 } 00:20:58.312 ] 00:20:58.312 }' 00:20:58.312 [2024-05-13 20:34:14.055090] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:20:58.312 [2024-05-13 20:34:14.055141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080848 ] 00:20:58.312 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.312 [2024-05-13 20:34:14.109809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.312 [2024-05-13 20:34:14.161853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.573 [2024-05-13 20:34:14.278321] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.573 [2024-05-13 20:34:14.278383] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:59.145 20:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:59.145 20:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:59.145 20:34:14 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:59.145 Running I/O for 10 seconds... 00:21:09.155 00:21:09.155 Latency(us) 00:21:09.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.155 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:09.155 Verification LBA range: start 0x0 length 0x2000 00:21:09.155 TLSTESTn1 : 10.01 5349.99 20.90 0.00 0.00 23889.63 6198.61 55268.69 00:21:09.155 =================================================================================================================== 00:21:09.155 Total : 5349.99 20.90 0.00 0.00 23889.63 6198.61 55268.69 00:21:09.155 0 00:21:09.156 20:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.156 20:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3080848 00:21:09.156 20:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3080848 ']' 00:21:09.156 20:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3080848 00:21:09.156 20:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:09.156 20:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:09.156 20:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3080848 00:21:09.156 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:09.156 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:09.156 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3080848' 00:21:09.156 killing process with pid 3080848 00:21:09.156 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3080848 00:21:09.156 Received shutdown signal, test time was about 10.000000 seconds 00:21:09.156 00:21:09.156 Latency(us) 00:21:09.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.156 =================================================================================================================== 00:21:09.156 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.156 [2024-05-13 20:34:25.016609] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:21:09.156 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3080848 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3080818 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3080818 ']' 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3080818 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3080818 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3080818' 00:21:09.418 killing process with pid 3080818 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3080818 00:21:09.418 [2024-05-13 20:34:25.185815] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:09.418 [2024-05-13 20:34:25.185846] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3080818 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3083185 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3083185 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3083185 ']' 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:09.418 20:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.418 [2024-05-13 20:34:25.358148] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:21:09.418 [2024-05-13 20:34:25.358204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.680 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.680 [2024-05-13 20:34:25.429265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.680 [2024-05-13 20:34:25.494244] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.680 [2024-05-13 20:34:25.494283] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.680 [2024-05-13 20:34:25.494290] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.680 [2024-05-13 20:34:25.494296] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.680 [2024-05-13 20:34:25.494302] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.680 [2024-05-13 20:34:25.494324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.252 20:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:10.252 20:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:10.252 20:34:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:10.252 20:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.252 20:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.252 20:34:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.252 20:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.BjCdW02lfE 00:21:10.253 20:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BjCdW02lfE 00:21:10.253 20:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:10.514 [2024-05-13 20:34:26.300946] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.514 20:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:10.774 20:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:10.775 [2024-05-13 20:34:26.589649] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:10.775 [2024-05-13 20:34:26.589701] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:10.775 [2024-05-13 20:34:26.589894] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.775 20:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:11.036 malloc0 00:21:11.036 20:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:21:11.036 20:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BjCdW02lfE 00:21:11.297 [2024-05-13 20:34:27.005555] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:11.297 20:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3083549 00:21:11.297 20:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:11.297 20:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:11.297 20:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3083549 /var/tmp/bdevperf.sock 00:21:11.297 20:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3083549 ']' 00:21:11.297 20:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.297 20:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:11.297 20:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.297 20:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:11.297 20:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.297 [2024-05-13 20:34:27.074113] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:21:11.297 [2024-05-13 20:34:27.074164] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083549 ] 00:21:11.297 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.297 [2024-05-13 20:34:27.156367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.297 [2024-05-13 20:34:27.209980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.241 20:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:12.241 20:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:12.241 20:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BjCdW02lfE 00:21:12.241 20:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:12.241 [2024-05-13 20:34:28.120034] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:12.502 nvme0n1 00:21:12.502 20:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:12.502 Running I/O for 1 seconds... 
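Before the I/O results that follow, a condensed recap of the TLS setup that target/tls.sh drove through the RPCs traced above may help. The command names and arguments are taken directly from the trace; paths are shortened and the PSK file is the temporary key generated earlier in the run, so treat this as a sketch of the flow rather than the exact script text:

    # target side (nvmf_tgt already running inside the cvl_0_0_ns_spdk netns)
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS (logged as experimental)
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BjCdW02lfE   # PSK-path form, flagged as deprecated in the trace

    # initiator side (bdevperf started with -z, controlled over /var/tmp/bdevperf.sock)
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BjCdW02lfE
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests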
00:21:13.444 00:21:13.444 Latency(us) 00:21:13.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.444 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:13.444 Verification LBA range: start 0x0 length 0x2000 00:21:13.444 nvme0n1 : 1.05 4385.78 17.13 0.00 0.00 28560.75 4669.44 48278.19 00:21:13.444 =================================================================================================================== 00:21:13.444 Total : 4385.78 17.13 0.00 0.00 28560.75 4669.44 48278.19 00:21:13.444 0 00:21:13.444 20:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3083549 00:21:13.445 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3083549 ']' 00:21:13.445 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3083549 00:21:13.445 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:13.445 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:13.445 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3083549 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3083549' 00:21:13.706 killing process with pid 3083549 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3083549 00:21:13.706 Received shutdown signal, test time was about 1.000000 seconds 00:21:13.706 00:21:13.706 Latency(us) 00:21:13.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.706 =================================================================================================================== 00:21:13.706 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3083549 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3083185 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3083185 ']' 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3083185 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3083185 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3083185' 00:21:13.706 killing process with pid 3083185 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3083185 00:21:13.706 [2024-05-13 20:34:29.602364] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:13.706 [2024-05-13 20:34:29.602402] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:13.706 20:34:29 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 3083185 00:21:13.968 20:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:13.968 20:34:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:13.968 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:13.968 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.968 20:34:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3083932 00:21:13.968 20:34:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3083932 00:21:13.968 20:34:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:13.968 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3083932 ']' 00:21:13.968 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.968 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:13.968 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.968 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:13.968 20:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.968 [2024-05-13 20:34:29.800422] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:21:13.968 [2024-05-13 20:34:29.800513] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.968 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.968 [2024-05-13 20:34:29.875995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.230 [2024-05-13 20:34:29.941197] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.230 [2024-05-13 20:34:29.941232] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.230 [2024-05-13 20:34:29.941240] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.230 [2024-05-13 20:34:29.941246] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.230 [2024-05-13 20:34:29.941252] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:14.230 [2024-05-13 20:34:29.941276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.802 20:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:14.802 20:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.803 [2024-05-13 20:34:30.608067] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.803 malloc0 00:21:14.803 [2024-05-13 20:34:30.634715] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:14.803 [2024-05-13 20:34:30.634762] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.803 [2024-05-13 20:34:30.634960] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3084250 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3084250 /var/tmp/bdevperf.sock 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3084250 ']' 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:14.803 20:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.803 [2024-05-13 20:34:30.711038] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:21:14.803 [2024-05-13 20:34:30.711082] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3084250 ] 00:21:14.803 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.064 [2024-05-13 20:34:30.792826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.064 [2024-05-13 20:34:30.846299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.637 20:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:15.637 20:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:15.637 20:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BjCdW02lfE 00:21:15.898 20:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:15.898 [2024-05-13 20:34:31.744358] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.898 nvme0n1 00:21:16.160 20:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:16.160 Running I/O for 1 seconds... 00:21:17.105 00:21:17.105 Latency(us) 00:21:17.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.105 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:17.105 Verification LBA range: start 0x0 length 0x2000 00:21:17.105 nvme0n1 : 1.02 5490.50 21.45 0.00 0.00 23071.12 6335.15 100925.44 00:21:17.105 =================================================================================================================== 00:21:17.105 Total : 5490.50 21.45 0.00 0.00 23071.12 6335.15 100925.44 00:21:17.105 0 00:21:17.105 20:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:17.106 20:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.106 20:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.367 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.367 20:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:17.367 "subsystems": [ 00:21:17.367 { 00:21:17.367 "subsystem": "keyring", 00:21:17.367 "config": [ 00:21:17.367 { 00:21:17.367 "method": "keyring_file_add_key", 00:21:17.367 "params": { 00:21:17.367 "name": "key0", 00:21:17.367 "path": "/tmp/tmp.BjCdW02lfE" 00:21:17.367 } 00:21:17.367 } 00:21:17.367 ] 00:21:17.367 }, 00:21:17.367 { 00:21:17.367 "subsystem": "iobuf", 00:21:17.367 "config": [ 00:21:17.367 { 00:21:17.367 "method": "iobuf_set_options", 00:21:17.367 "params": { 00:21:17.367 "small_pool_count": 8192, 00:21:17.367 "large_pool_count": 1024, 00:21:17.367 "small_bufsize": 8192, 00:21:17.367 "large_bufsize": 135168 00:21:17.367 } 00:21:17.367 } 00:21:17.367 ] 00:21:17.367 }, 00:21:17.367 { 00:21:17.367 "subsystem": "sock", 00:21:17.367 "config": [ 00:21:17.367 { 00:21:17.367 "method": "sock_impl_set_options", 00:21:17.367 "params": { 00:21:17.367 "impl_name": "posix", 00:21:17.367 "recv_buf_size": 2097152, 
00:21:17.367 "send_buf_size": 2097152, 00:21:17.367 "enable_recv_pipe": true, 00:21:17.367 "enable_quickack": false, 00:21:17.367 "enable_placement_id": 0, 00:21:17.367 "enable_zerocopy_send_server": true, 00:21:17.367 "enable_zerocopy_send_client": false, 00:21:17.367 "zerocopy_threshold": 0, 00:21:17.367 "tls_version": 0, 00:21:17.367 "enable_ktls": false 00:21:17.367 } 00:21:17.367 }, 00:21:17.367 { 00:21:17.367 "method": "sock_impl_set_options", 00:21:17.367 "params": { 00:21:17.367 "impl_name": "ssl", 00:21:17.367 "recv_buf_size": 4096, 00:21:17.367 "send_buf_size": 4096, 00:21:17.367 "enable_recv_pipe": true, 00:21:17.367 "enable_quickack": false, 00:21:17.367 "enable_placement_id": 0, 00:21:17.367 "enable_zerocopy_send_server": true, 00:21:17.367 "enable_zerocopy_send_client": false, 00:21:17.367 "zerocopy_threshold": 0, 00:21:17.367 "tls_version": 0, 00:21:17.367 "enable_ktls": false 00:21:17.367 } 00:21:17.367 } 00:21:17.367 ] 00:21:17.367 }, 00:21:17.367 { 00:21:17.367 "subsystem": "vmd", 00:21:17.367 "config": [] 00:21:17.367 }, 00:21:17.367 { 00:21:17.367 "subsystem": "accel", 00:21:17.367 "config": [ 00:21:17.367 { 00:21:17.367 "method": "accel_set_options", 00:21:17.367 "params": { 00:21:17.367 "small_cache_size": 128, 00:21:17.367 "large_cache_size": 16, 00:21:17.367 "task_count": 2048, 00:21:17.367 "sequence_count": 2048, 00:21:17.367 "buf_count": 2048 00:21:17.367 } 00:21:17.367 } 00:21:17.367 ] 00:21:17.367 }, 00:21:17.367 { 00:21:17.367 "subsystem": "bdev", 00:21:17.367 "config": [ 00:21:17.367 { 00:21:17.367 "method": "bdev_set_options", 00:21:17.367 "params": { 00:21:17.367 "bdev_io_pool_size": 65535, 00:21:17.367 "bdev_io_cache_size": 256, 00:21:17.367 "bdev_auto_examine": true, 00:21:17.367 "iobuf_small_cache_size": 128, 00:21:17.367 "iobuf_large_cache_size": 16 00:21:17.367 } 00:21:17.367 }, 00:21:17.367 { 00:21:17.367 "method": "bdev_raid_set_options", 00:21:17.367 "params": { 00:21:17.367 "process_window_size_kb": 1024 00:21:17.367 } 00:21:17.367 }, 00:21:17.367 { 00:21:17.367 "method": "bdev_iscsi_set_options", 00:21:17.367 "params": { 00:21:17.367 "timeout_sec": 30 00:21:17.367 } 00:21:17.367 }, 00:21:17.367 { 00:21:17.367 "method": "bdev_nvme_set_options", 00:21:17.367 "params": { 00:21:17.367 "action_on_timeout": "none", 00:21:17.367 "timeout_us": 0, 00:21:17.367 "timeout_admin_us": 0, 00:21:17.367 "keep_alive_timeout_ms": 10000, 00:21:17.367 "arbitration_burst": 0, 00:21:17.367 "low_priority_weight": 0, 00:21:17.367 "medium_priority_weight": 0, 00:21:17.367 "high_priority_weight": 0, 00:21:17.367 "nvme_adminq_poll_period_us": 10000, 00:21:17.367 "nvme_ioq_poll_period_us": 0, 00:21:17.367 "io_queue_requests": 0, 00:21:17.367 "delay_cmd_submit": true, 00:21:17.367 "transport_retry_count": 4, 00:21:17.367 "bdev_retry_count": 3, 00:21:17.367 "transport_ack_timeout": 0, 00:21:17.367 "ctrlr_loss_timeout_sec": 0, 00:21:17.367 "reconnect_delay_sec": 0, 00:21:17.367 "fast_io_fail_timeout_sec": 0, 00:21:17.367 "disable_auto_failback": false, 00:21:17.367 "generate_uuids": false, 00:21:17.367 "transport_tos": 0, 00:21:17.367 "nvme_error_stat": false, 00:21:17.367 "rdma_srq_size": 0, 00:21:17.367 "io_path_stat": false, 00:21:17.367 "allow_accel_sequence": false, 00:21:17.367 "rdma_max_cq_size": 0, 00:21:17.367 "rdma_cm_event_timeout_ms": 0, 00:21:17.367 "dhchap_digests": [ 00:21:17.367 "sha256", 00:21:17.367 "sha384", 00:21:17.367 "sha512" 00:21:17.367 ], 00:21:17.367 "dhchap_dhgroups": [ 00:21:17.367 "null", 00:21:17.367 "ffdhe2048", 00:21:17.367 "ffdhe3072", 
00:21:17.367 "ffdhe4096", 00:21:17.367 "ffdhe6144", 00:21:17.367 "ffdhe8192" 00:21:17.367 ] 00:21:17.367 } 00:21:17.367 }, 00:21:17.367 { 00:21:17.367 "method": "bdev_nvme_set_hotplug", 00:21:17.367 "params": { 00:21:17.367 "period_us": 100000, 00:21:17.367 "enable": false 00:21:17.367 } 00:21:17.367 }, 00:21:17.367 { 00:21:17.367 "method": "bdev_malloc_create", 00:21:17.367 "params": { 00:21:17.367 "name": "malloc0", 00:21:17.367 "num_blocks": 8192, 00:21:17.367 "block_size": 4096, 00:21:17.367 "physical_block_size": 4096, 00:21:17.367 "uuid": "319d6f31-2476-45f9-b2ee-018926a2a7f7", 00:21:17.367 "optimal_io_boundary": 0 00:21:17.367 } 00:21:17.367 }, 00:21:17.367 { 00:21:17.367 "method": "bdev_wait_for_examine" 00:21:17.367 } 00:21:17.367 ] 00:21:17.367 }, 00:21:17.367 { 00:21:17.367 "subsystem": "nbd", 00:21:17.367 "config": [] 00:21:17.367 }, 00:21:17.367 { 00:21:17.367 "subsystem": "scheduler", 00:21:17.367 "config": [ 00:21:17.367 { 00:21:17.367 "method": "framework_set_scheduler", 00:21:17.368 "params": { 00:21:17.368 "name": "static" 00:21:17.368 } 00:21:17.368 } 00:21:17.368 ] 00:21:17.368 }, 00:21:17.368 { 00:21:17.368 "subsystem": "nvmf", 00:21:17.368 "config": [ 00:21:17.368 { 00:21:17.368 "method": "nvmf_set_config", 00:21:17.368 "params": { 00:21:17.368 "discovery_filter": "match_any", 00:21:17.368 "admin_cmd_passthru": { 00:21:17.368 "identify_ctrlr": false 00:21:17.368 } 00:21:17.368 } 00:21:17.368 }, 00:21:17.368 { 00:21:17.368 "method": "nvmf_set_max_subsystems", 00:21:17.368 "params": { 00:21:17.368 "max_subsystems": 1024 00:21:17.368 } 00:21:17.368 }, 00:21:17.368 { 00:21:17.368 "method": "nvmf_set_crdt", 00:21:17.368 "params": { 00:21:17.368 "crdt1": 0, 00:21:17.368 "crdt2": 0, 00:21:17.368 "crdt3": 0 00:21:17.368 } 00:21:17.368 }, 00:21:17.368 { 00:21:17.368 "method": "nvmf_create_transport", 00:21:17.368 "params": { 00:21:17.368 "trtype": "TCP", 00:21:17.368 "max_queue_depth": 128, 00:21:17.368 "max_io_qpairs_per_ctrlr": 127, 00:21:17.368 "in_capsule_data_size": 4096, 00:21:17.368 "max_io_size": 131072, 00:21:17.368 "io_unit_size": 131072, 00:21:17.368 "max_aq_depth": 128, 00:21:17.368 "num_shared_buffers": 511, 00:21:17.368 "buf_cache_size": 4294967295, 00:21:17.368 "dif_insert_or_strip": false, 00:21:17.368 "zcopy": false, 00:21:17.368 "c2h_success": false, 00:21:17.368 "sock_priority": 0, 00:21:17.368 "abort_timeout_sec": 1, 00:21:17.368 "ack_timeout": 0, 00:21:17.368 "data_wr_pool_size": 0 00:21:17.368 } 00:21:17.368 }, 00:21:17.368 { 00:21:17.368 "method": "nvmf_create_subsystem", 00:21:17.368 "params": { 00:21:17.368 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.368 "allow_any_host": false, 00:21:17.368 "serial_number": "00000000000000000000", 00:21:17.368 "model_number": "SPDK bdev Controller", 00:21:17.368 "max_namespaces": 32, 00:21:17.368 "min_cntlid": 1, 00:21:17.368 "max_cntlid": 65519, 00:21:17.368 "ana_reporting": false 00:21:17.368 } 00:21:17.368 }, 00:21:17.368 { 00:21:17.368 "method": "nvmf_subsystem_add_host", 00:21:17.368 "params": { 00:21:17.368 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.368 "host": "nqn.2016-06.io.spdk:host1", 00:21:17.368 "psk": "key0" 00:21:17.368 } 00:21:17.368 }, 00:21:17.368 { 00:21:17.368 "method": "nvmf_subsystem_add_ns", 00:21:17.368 "params": { 00:21:17.368 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.368 "namespace": { 00:21:17.368 "nsid": 1, 00:21:17.368 "bdev_name": "malloc0", 00:21:17.368 "nguid": "319D6F31247645F9B2EE018926A2A7F7", 00:21:17.368 "uuid": "319d6f31-2476-45f9-b2ee-018926a2a7f7", 00:21:17.368 
"no_auto_visible": false 00:21:17.368 } 00:21:17.368 } 00:21:17.368 }, 00:21:17.368 { 00:21:17.368 "method": "nvmf_subsystem_add_listener", 00:21:17.368 "params": { 00:21:17.368 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.368 "listen_address": { 00:21:17.368 "trtype": "TCP", 00:21:17.368 "adrfam": "IPv4", 00:21:17.368 "traddr": "10.0.0.2", 00:21:17.368 "trsvcid": "4420" 00:21:17.368 }, 00:21:17.368 "secure_channel": true 00:21:17.368 } 00:21:17.368 } 00:21:17.368 ] 00:21:17.368 } 00:21:17.368 ] 00:21:17.368 }' 00:21:17.368 20:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:17.630 20:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:17.630 "subsystems": [ 00:21:17.630 { 00:21:17.630 "subsystem": "keyring", 00:21:17.630 "config": [ 00:21:17.630 { 00:21:17.630 "method": "keyring_file_add_key", 00:21:17.630 "params": { 00:21:17.630 "name": "key0", 00:21:17.630 "path": "/tmp/tmp.BjCdW02lfE" 00:21:17.630 } 00:21:17.630 } 00:21:17.630 ] 00:21:17.630 }, 00:21:17.630 { 00:21:17.630 "subsystem": "iobuf", 00:21:17.630 "config": [ 00:21:17.630 { 00:21:17.630 "method": "iobuf_set_options", 00:21:17.630 "params": { 00:21:17.630 "small_pool_count": 8192, 00:21:17.630 "large_pool_count": 1024, 00:21:17.630 "small_bufsize": 8192, 00:21:17.630 "large_bufsize": 135168 00:21:17.630 } 00:21:17.630 } 00:21:17.630 ] 00:21:17.630 }, 00:21:17.630 { 00:21:17.630 "subsystem": "sock", 00:21:17.630 "config": [ 00:21:17.630 { 00:21:17.630 "method": "sock_impl_set_options", 00:21:17.630 "params": { 00:21:17.630 "impl_name": "posix", 00:21:17.630 "recv_buf_size": 2097152, 00:21:17.630 "send_buf_size": 2097152, 00:21:17.630 "enable_recv_pipe": true, 00:21:17.630 "enable_quickack": false, 00:21:17.630 "enable_placement_id": 0, 00:21:17.630 "enable_zerocopy_send_server": true, 00:21:17.630 "enable_zerocopy_send_client": false, 00:21:17.630 "zerocopy_threshold": 0, 00:21:17.630 "tls_version": 0, 00:21:17.630 "enable_ktls": false 00:21:17.630 } 00:21:17.630 }, 00:21:17.630 { 00:21:17.630 "method": "sock_impl_set_options", 00:21:17.630 "params": { 00:21:17.630 "impl_name": "ssl", 00:21:17.630 "recv_buf_size": 4096, 00:21:17.630 "send_buf_size": 4096, 00:21:17.630 "enable_recv_pipe": true, 00:21:17.630 "enable_quickack": false, 00:21:17.630 "enable_placement_id": 0, 00:21:17.630 "enable_zerocopy_send_server": true, 00:21:17.630 "enable_zerocopy_send_client": false, 00:21:17.630 "zerocopy_threshold": 0, 00:21:17.630 "tls_version": 0, 00:21:17.630 "enable_ktls": false 00:21:17.630 } 00:21:17.630 } 00:21:17.630 ] 00:21:17.630 }, 00:21:17.630 { 00:21:17.630 "subsystem": "vmd", 00:21:17.630 "config": [] 00:21:17.630 }, 00:21:17.630 { 00:21:17.630 "subsystem": "accel", 00:21:17.630 "config": [ 00:21:17.630 { 00:21:17.630 "method": "accel_set_options", 00:21:17.630 "params": { 00:21:17.630 "small_cache_size": 128, 00:21:17.630 "large_cache_size": 16, 00:21:17.630 "task_count": 2048, 00:21:17.630 "sequence_count": 2048, 00:21:17.630 "buf_count": 2048 00:21:17.630 } 00:21:17.630 } 00:21:17.630 ] 00:21:17.630 }, 00:21:17.630 { 00:21:17.630 "subsystem": "bdev", 00:21:17.630 "config": [ 00:21:17.630 { 00:21:17.630 "method": "bdev_set_options", 00:21:17.630 "params": { 00:21:17.630 "bdev_io_pool_size": 65535, 00:21:17.630 "bdev_io_cache_size": 256, 00:21:17.630 "bdev_auto_examine": true, 00:21:17.630 "iobuf_small_cache_size": 128, 00:21:17.630 "iobuf_large_cache_size": 16 00:21:17.630 } 00:21:17.630 }, 
00:21:17.630 { 00:21:17.630 "method": "bdev_raid_set_options", 00:21:17.630 "params": { 00:21:17.630 "process_window_size_kb": 1024 00:21:17.630 } 00:21:17.630 }, 00:21:17.630 { 00:21:17.630 "method": "bdev_iscsi_set_options", 00:21:17.630 "params": { 00:21:17.630 "timeout_sec": 30 00:21:17.630 } 00:21:17.630 }, 00:21:17.630 { 00:21:17.630 "method": "bdev_nvme_set_options", 00:21:17.630 "params": { 00:21:17.630 "action_on_timeout": "none", 00:21:17.630 "timeout_us": 0, 00:21:17.630 "timeout_admin_us": 0, 00:21:17.630 "keep_alive_timeout_ms": 10000, 00:21:17.630 "arbitration_burst": 0, 00:21:17.630 "low_priority_weight": 0, 00:21:17.630 "medium_priority_weight": 0, 00:21:17.630 "high_priority_weight": 0, 00:21:17.630 "nvme_adminq_poll_period_us": 10000, 00:21:17.630 "nvme_ioq_poll_period_us": 0, 00:21:17.630 "io_queue_requests": 512, 00:21:17.630 "delay_cmd_submit": true, 00:21:17.630 "transport_retry_count": 4, 00:21:17.630 "bdev_retry_count": 3, 00:21:17.630 "transport_ack_timeout": 0, 00:21:17.630 "ctrlr_loss_timeout_sec": 0, 00:21:17.630 "reconnect_delay_sec": 0, 00:21:17.630 "fast_io_fail_timeout_sec": 0, 00:21:17.630 "disable_auto_failback": false, 00:21:17.630 "generate_uuids": false, 00:21:17.630 "transport_tos": 0, 00:21:17.630 "nvme_error_stat": false, 00:21:17.630 "rdma_srq_size": 0, 00:21:17.630 "io_path_stat": false, 00:21:17.630 "allow_accel_sequence": false, 00:21:17.630 "rdma_max_cq_size": 0, 00:21:17.630 "rdma_cm_event_timeout_ms": 0, 00:21:17.630 "dhchap_digests": [ 00:21:17.630 "sha256", 00:21:17.630 "sha384", 00:21:17.630 "sha512" 00:21:17.630 ], 00:21:17.630 "dhchap_dhgroups": [ 00:21:17.630 "null", 00:21:17.630 "ffdhe2048", 00:21:17.630 "ffdhe3072", 00:21:17.630 "ffdhe4096", 00:21:17.630 "ffdhe6144", 00:21:17.630 "ffdhe8192" 00:21:17.630 ] 00:21:17.630 } 00:21:17.630 }, 00:21:17.630 { 00:21:17.630 "method": "bdev_nvme_attach_controller", 00:21:17.630 "params": { 00:21:17.630 "name": "nvme0", 00:21:17.630 "trtype": "TCP", 00:21:17.630 "adrfam": "IPv4", 00:21:17.630 "traddr": "10.0.0.2", 00:21:17.630 "trsvcid": "4420", 00:21:17.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.630 "prchk_reftag": false, 00:21:17.630 "prchk_guard": false, 00:21:17.630 "ctrlr_loss_timeout_sec": 0, 00:21:17.630 "reconnect_delay_sec": 0, 00:21:17.630 "fast_io_fail_timeout_sec": 0, 00:21:17.630 "psk": "key0", 00:21:17.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.630 "hdgst": false, 00:21:17.630 "ddgst": false 00:21:17.630 } 00:21:17.630 }, 00:21:17.630 { 00:21:17.630 "method": "bdev_nvme_set_hotplug", 00:21:17.630 "params": { 00:21:17.630 "period_us": 100000, 00:21:17.630 "enable": false 00:21:17.630 } 00:21:17.630 }, 00:21:17.630 { 00:21:17.630 "method": "bdev_enable_histogram", 00:21:17.630 "params": { 00:21:17.630 "name": "nvme0n1", 00:21:17.630 "enable": true 00:21:17.630 } 00:21:17.630 }, 00:21:17.630 { 00:21:17.630 "method": "bdev_wait_for_examine" 00:21:17.630 } 00:21:17.630 ] 00:21:17.630 }, 00:21:17.630 { 00:21:17.630 "subsystem": "nbd", 00:21:17.630 "config": [] 00:21:17.630 } 00:21:17.630 ] 00:21:17.630 }' 00:21:17.630 20:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3084250 00:21:17.630 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3084250 ']' 00:21:17.630 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3084250 00:21:17.630 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:17.630 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:17.630 
20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3084250 00:21:17.630 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:17.630 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:17.630 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3084250' 00:21:17.630 killing process with pid 3084250 00:21:17.630 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3084250 00:21:17.630 Received shutdown signal, test time was about 1.000000 seconds 00:21:17.630 00:21:17.630 Latency(us) 00:21:17.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.630 =================================================================================================================== 00:21:17.630 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.630 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3084250 00:21:17.631 20:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3083932 00:21:17.631 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3083932 ']' 00:21:17.631 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3083932 00:21:17.631 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:17.631 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:17.631 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3083932 00:21:17.631 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:17.631 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:17.631 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3083932' 00:21:17.631 killing process with pid 3083932 00:21:17.631 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3083932 00:21:17.631 [2024-05-13 20:34:33.538362] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:17.631 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3083932 00:21:17.893 20:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:17.893 20:34:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:17.893 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:17.893 20:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:17.893 "subsystems": [ 00:21:17.893 { 00:21:17.893 "subsystem": "keyring", 00:21:17.893 "config": [ 00:21:17.893 { 00:21:17.893 "method": "keyring_file_add_key", 00:21:17.893 "params": { 00:21:17.893 "name": "key0", 00:21:17.893 "path": "/tmp/tmp.BjCdW02lfE" 00:21:17.893 } 00:21:17.893 } 00:21:17.893 ] 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "subsystem": "iobuf", 00:21:17.893 "config": [ 00:21:17.893 { 00:21:17.893 "method": "iobuf_set_options", 00:21:17.893 "params": { 00:21:17.893 "small_pool_count": 8192, 00:21:17.893 "large_pool_count": 1024, 00:21:17.893 "small_bufsize": 8192, 00:21:17.893 "large_bufsize": 135168 00:21:17.893 } 00:21:17.893 } 00:21:17.893 ] 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "subsystem": "sock", 00:21:17.893 "config": [ 00:21:17.893 { 00:21:17.893 "method": 
"sock_impl_set_options", 00:21:17.893 "params": { 00:21:17.893 "impl_name": "posix", 00:21:17.893 "recv_buf_size": 2097152, 00:21:17.893 "send_buf_size": 2097152, 00:21:17.893 "enable_recv_pipe": true, 00:21:17.893 "enable_quickack": false, 00:21:17.893 "enable_placement_id": 0, 00:21:17.893 "enable_zerocopy_send_server": true, 00:21:17.893 "enable_zerocopy_send_client": false, 00:21:17.893 "zerocopy_threshold": 0, 00:21:17.893 "tls_version": 0, 00:21:17.893 "enable_ktls": false 00:21:17.893 } 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "method": "sock_impl_set_options", 00:21:17.893 "params": { 00:21:17.893 "impl_name": "ssl", 00:21:17.893 "recv_buf_size": 4096, 00:21:17.893 "send_buf_size": 4096, 00:21:17.893 "enable_recv_pipe": true, 00:21:17.893 "enable_quickack": false, 00:21:17.893 "enable_placement_id": 0, 00:21:17.893 "enable_zerocopy_send_server": true, 00:21:17.893 "enable_zerocopy_send_client": false, 00:21:17.893 "zerocopy_threshold": 0, 00:21:17.893 "tls_version": 0, 00:21:17.893 "enable_ktls": false 00:21:17.893 } 00:21:17.893 } 00:21:17.893 ] 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "subsystem": "vmd", 00:21:17.893 "config": [] 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "subsystem": "accel", 00:21:17.893 "config": [ 00:21:17.893 { 00:21:17.893 "method": "accel_set_options", 00:21:17.893 "params": { 00:21:17.893 "small_cache_size": 128, 00:21:17.893 "large_cache_size": 16, 00:21:17.893 "task_count": 2048, 00:21:17.893 "sequence_count": 2048, 00:21:17.893 "buf_count": 2048 00:21:17.893 } 00:21:17.893 } 00:21:17.893 ] 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "subsystem": "bdev", 00:21:17.893 "config": [ 00:21:17.893 { 00:21:17.893 "method": "bdev_set_options", 00:21:17.893 "params": { 00:21:17.893 "bdev_io_pool_size": 65535, 00:21:17.893 "bdev_io_cache_size": 256, 00:21:17.893 "bdev_auto_examine": true, 00:21:17.893 "iobuf_small_cache_size": 128, 00:21:17.893 "iobuf_large_cache_size": 16 00:21:17.893 } 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "method": "bdev_raid_set_options", 00:21:17.893 "params": { 00:21:17.893 "process_window_size_kb": 1024 00:21:17.893 } 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "method": "bdev_iscsi_set_options", 00:21:17.893 "params": { 00:21:17.893 "timeout_sec": 30 00:21:17.893 } 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "method": "bdev_nvme_set_options", 00:21:17.893 "params": { 00:21:17.893 "action_on_timeout": "none", 00:21:17.893 "timeout_us": 0, 00:21:17.893 "timeout_admin_us": 0, 00:21:17.893 "keep_alive_timeout_ms": 10000, 00:21:17.893 "arbitration_burst": 0, 00:21:17.893 "low_priority_weight": 0, 00:21:17.893 "medium_priority_weight": 0, 00:21:17.893 "high_priority_weight": 0, 00:21:17.893 "nvme_adminq_poll_period_us": 10000, 00:21:17.893 "nvme_ioq_poll_period_us": 0, 00:21:17.893 "io_queue_requests": 0, 00:21:17.893 "delay_cmd_submit": true, 00:21:17.893 "transport_retry_count": 4, 00:21:17.893 "bdev_retry_count": 3, 00:21:17.893 "transport_ack_timeout": 0, 00:21:17.893 "ctrlr_loss_timeout_sec": 0, 00:21:17.893 "reconnect_delay_sec": 0, 00:21:17.893 "fast_io_fail_timeout_sec": 0, 00:21:17.893 "disable_auto_failback": false, 00:21:17.893 "generate_uuids": false, 00:21:17.893 "transport_tos": 0, 00:21:17.893 "nvme_error_stat": false, 00:21:17.893 "rdma_srq_size": 0, 00:21:17.893 "io_path_stat": false, 00:21:17.893 "allow_accel_sequence": false, 00:21:17.893 "rdma_max_cq_size": 0, 00:21:17.893 "rdma_cm_event_timeout_ms": 0, 00:21:17.893 "dhchap_digests": [ 00:21:17.893 "sha256", 00:21:17.893 "sha384", 00:21:17.893 "sha512" 
00:21:17.893 ], 00:21:17.893 "dhchap_dhgroups": [ 00:21:17.893 "null", 00:21:17.893 "ffdhe2048", 00:21:17.893 "ffdhe3072", 00:21:17.893 "ffdhe4096", 00:21:17.893 "ffdhe6144", 00:21:17.893 "ffdhe8192" 00:21:17.893 ] 00:21:17.893 } 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "method": "bdev_nvme_set_hotplug", 00:21:17.893 "params": { 00:21:17.893 "period_us": 100000, 00:21:17.893 "enable": false 00:21:17.893 } 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "method": "bdev_malloc_create", 00:21:17.893 "params": { 00:21:17.893 "name": "malloc0", 00:21:17.893 "num_blocks": 8192, 00:21:17.893 "block_size": 4096, 00:21:17.893 "physical_block_size": 4096, 00:21:17.893 "uuid": "319d6f31-2476-45f9-b2ee-018926a2a7f7", 00:21:17.893 "optimal_io_boundary": 0 00:21:17.893 } 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "method": "bdev_wait_for_examine" 00:21:17.893 } 00:21:17.893 ] 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "subsystem": "nbd", 00:21:17.893 "config": [] 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "subsystem": "scheduler", 00:21:17.893 "config": [ 00:21:17.893 { 00:21:17.893 "method": "framework_set_scheduler", 00:21:17.893 "params": { 00:21:17.893 "name": "static" 00:21:17.893 } 00:21:17.893 } 00:21:17.893 ] 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "subsystem": "nvmf", 00:21:17.893 "config": [ 00:21:17.893 { 00:21:17.893 "method": "nvmf_set_config", 00:21:17.893 "params": { 00:21:17.893 "discovery_filter": "match_any", 00:21:17.893 "admin_cmd_passthru": { 00:21:17.893 "identify_ctrlr": false 00:21:17.893 } 00:21:17.893 } 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "method": "nvmf_set_max_subsystems", 00:21:17.893 "params": { 00:21:17.893 "max_subsystems": 1024 00:21:17.893 } 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "method": "nvmf_set_crdt", 00:21:17.893 "params": { 00:21:17.893 "crdt1": 0, 00:21:17.893 "crdt2": 0, 00:21:17.893 "crdt3": 0 00:21:17.893 } 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "method": "nvmf_create_transport", 00:21:17.893 "params": { 00:21:17.893 "trtype": "TCP", 00:21:17.893 "max_queue_depth": 128, 00:21:17.893 "max_io_qpairs_per_ctrlr": 127, 00:21:17.893 "in_capsule_data_size": 4096, 00:21:17.893 "max_io_size": 131072, 00:21:17.893 "io_unit_size": 131072, 00:21:17.893 "max_aq_depth": 128, 00:21:17.893 "num_shared_buffers": 511, 00:21:17.893 "buf_cache_size": 4294967295, 00:21:17.893 "dif_insert_or_strip": false, 00:21:17.893 "zcopy": false, 00:21:17.893 "c2h_success": false, 00:21:17.893 "sock_priority": 0, 00:21:17.893 "abort_timeout_sec": 1, 00:21:17.893 "ack_timeout": 0, 00:21:17.893 "data_wr_pool_size": 0 00:21:17.893 } 00:21:17.893 }, 00:21:17.893 { 00:21:17.893 "method": "nvmf_create_subsystem", 00:21:17.893 "params": { 00:21:17.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.894 "allow_any_host": false, 00:21:17.894 "serial_number": "00000000000000000000", 00:21:17.894 "model_number": "SPDK bdev Controller", 00:21:17.894 "max_namespaces": 32, 00:21:17.894 "min_cntlid": 1, 00:21:17.894 "max_cntlid": 65519, 00:21:17.894 "ana_reporting": false 00:21:17.894 } 00:21:17.894 }, 00:21:17.894 { 00:21:17.894 "method": "nvmf_subsystem_add_host", 00:21:17.894 "params": { 00:21:17.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.894 "host": "nqn.2016-06.io.spdk:host1", 00:21:17.894 "psk": "key0" 00:21:17.894 } 00:21:17.894 }, 00:21:17.894 { 00:21:17.894 "method": "nvmf_subsystem_add_ns", 00:21:17.894 "params": { 00:21:17.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.894 "namespace": { 00:21:17.894 "nsid": 1, 00:21:17.894 "bdev_name": "malloc0", 00:21:17.894 
"nguid": "319D6F31247645F9B2EE018926A2A7F7", 00:21:17.894 "uuid": "319d6f31-2476-45f9-b2ee-018926a2a7f7", 00:21:17.894 "no_auto_visible": false 00:21:17.894 } 00:21:17.894 } 00:21:17.894 }, 00:21:17.894 { 00:21:17.894 "method": "nvmf_subsystem_add_listener", 00:21:17.894 "params": { 00:21:17.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.894 "listen_address": { 00:21:17.894 "trtype": "TCP", 00:21:17.894 "adrfam": "IPv4", 00:21:17.894 "traddr": "10.0.0.2", 00:21:17.894 "trsvcid": "4420" 00:21:17.894 }, 00:21:17.894 "secure_channel": true 00:21:17.894 } 00:21:17.894 } 00:21:17.894 ] 00:21:17.894 } 00:21:17.894 ] 00:21:17.894 }' 00:21:17.894 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.894 20:34:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3084886 00:21:17.894 20:34:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3084886 00:21:17.894 20:34:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:17.894 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3084886 ']' 00:21:17.894 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.894 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:17.894 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.894 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:17.894 20:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.894 [2024-05-13 20:34:33.740010] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:21:17.894 [2024-05-13 20:34:33.740067] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.894 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.894 [2024-05-13 20:34:33.811597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.154 [2024-05-13 20:34:33.876741] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.154 [2024-05-13 20:34:33.876779] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.154 [2024-05-13 20:34:33.876787] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.155 [2024-05-13 20:34:33.876793] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.155 [2024-05-13 20:34:33.876799] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:18.155 [2024-05-13 20:34:33.876855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.155 [2024-05-13 20:34:34.065846] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.155 [2024-05-13 20:34:34.097828] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:18.155 [2024-05-13 20:34:34.097874] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.415 [2024-05-13 20:34:34.110644] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3084973 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3084973 /var/tmp/bdevperf.sock 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3084973 ']' 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
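Both application starts in this phase receive their JSON configuration over a file descriptor: the target above is launched with '-c /dev/fd/62' and the bdevperf instance below with '-c /dev/fd/63', and the matching 'echo' of the saved config appears next to each command in the xtrace. This is consistent with bash process substitution; a sketch of the pattern (not the exact script line) would be:

    cfg='{ "subsystems": [ ... ] }'                        # e.g. the tgtcfg/bperfcfg captured via save_config
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$cfg")    # <( ... ) expands to /dev/fd/NN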
00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.677 20:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:18.677 "subsystems": [ 00:21:18.677 { 00:21:18.677 "subsystem": "keyring", 00:21:18.677 "config": [ 00:21:18.677 { 00:21:18.677 "method": "keyring_file_add_key", 00:21:18.677 "params": { 00:21:18.677 "name": "key0", 00:21:18.677 "path": "/tmp/tmp.BjCdW02lfE" 00:21:18.677 } 00:21:18.677 } 00:21:18.677 ] 00:21:18.677 }, 00:21:18.677 { 00:21:18.677 "subsystem": "iobuf", 00:21:18.677 "config": [ 00:21:18.677 { 00:21:18.677 "method": "iobuf_set_options", 00:21:18.677 "params": { 00:21:18.677 "small_pool_count": 8192, 00:21:18.677 "large_pool_count": 1024, 00:21:18.677 "small_bufsize": 8192, 00:21:18.677 "large_bufsize": 135168 00:21:18.677 } 00:21:18.677 } 00:21:18.677 ] 00:21:18.677 }, 00:21:18.677 { 00:21:18.677 "subsystem": "sock", 00:21:18.677 "config": [ 00:21:18.677 { 00:21:18.677 "method": "sock_impl_set_options", 00:21:18.677 "params": { 00:21:18.677 "impl_name": "posix", 00:21:18.677 "recv_buf_size": 2097152, 00:21:18.677 "send_buf_size": 2097152, 00:21:18.677 "enable_recv_pipe": true, 00:21:18.677 "enable_quickack": false, 00:21:18.677 "enable_placement_id": 0, 00:21:18.677 "enable_zerocopy_send_server": true, 00:21:18.677 "enable_zerocopy_send_client": false, 00:21:18.677 "zerocopy_threshold": 0, 00:21:18.677 "tls_version": 0, 00:21:18.677 "enable_ktls": false 00:21:18.677 } 00:21:18.677 }, 00:21:18.677 { 00:21:18.677 "method": "sock_impl_set_options", 00:21:18.677 "params": { 00:21:18.677 "impl_name": "ssl", 00:21:18.677 "recv_buf_size": 4096, 00:21:18.677 "send_buf_size": 4096, 00:21:18.677 "enable_recv_pipe": true, 00:21:18.677 "enable_quickack": false, 00:21:18.677 "enable_placement_id": 0, 00:21:18.677 "enable_zerocopy_send_server": true, 00:21:18.677 "enable_zerocopy_send_client": false, 00:21:18.677 "zerocopy_threshold": 0, 00:21:18.677 "tls_version": 0, 00:21:18.677 "enable_ktls": false 00:21:18.677 } 00:21:18.677 } 00:21:18.677 ] 00:21:18.677 }, 00:21:18.677 { 00:21:18.677 "subsystem": "vmd", 00:21:18.677 "config": [] 00:21:18.677 }, 00:21:18.677 { 00:21:18.677 "subsystem": "accel", 00:21:18.677 "config": [ 00:21:18.677 { 00:21:18.677 "method": "accel_set_options", 00:21:18.677 "params": { 00:21:18.677 "small_cache_size": 128, 00:21:18.677 "large_cache_size": 16, 00:21:18.678 "task_count": 2048, 00:21:18.678 "sequence_count": 2048, 00:21:18.678 "buf_count": 2048 00:21:18.678 } 00:21:18.678 } 00:21:18.678 ] 00:21:18.678 }, 00:21:18.678 { 00:21:18.678 "subsystem": "bdev", 00:21:18.678 "config": [ 00:21:18.678 { 00:21:18.678 "method": "bdev_set_options", 00:21:18.678 "params": { 00:21:18.678 "bdev_io_pool_size": 65535, 00:21:18.678 "bdev_io_cache_size": 256, 00:21:18.678 "bdev_auto_examine": true, 00:21:18.678 "iobuf_small_cache_size": 128, 00:21:18.678 "iobuf_large_cache_size": 16 00:21:18.678 } 00:21:18.678 }, 00:21:18.678 { 00:21:18.678 "method": "bdev_raid_set_options", 00:21:18.678 "params": { 00:21:18.678 "process_window_size_kb": 1024 00:21:18.678 } 00:21:18.678 }, 00:21:18.678 { 00:21:18.678 "method": "bdev_iscsi_set_options", 00:21:18.678 "params": { 00:21:18.678 "timeout_sec": 30 00:21:18.678 } 
00:21:18.678 }, 00:21:18.678 { 00:21:18.678 "method": "bdev_nvme_set_options", 00:21:18.678 "params": { 00:21:18.678 "action_on_timeout": "none", 00:21:18.678 "timeout_us": 0, 00:21:18.678 "timeout_admin_us": 0, 00:21:18.678 "keep_alive_timeout_ms": 10000, 00:21:18.678 "arbitration_burst": 0, 00:21:18.678 "low_priority_weight": 0, 00:21:18.678 "medium_priority_weight": 0, 00:21:18.678 "high_priority_weight": 0, 00:21:18.678 "nvme_adminq_poll_period_us": 10000, 00:21:18.678 "nvme_ioq_poll_period_us": 0, 00:21:18.678 "io_queue_requests": 512, 00:21:18.678 "delay_cmd_submit": true, 00:21:18.678 "transport_retry_count": 4, 00:21:18.678 "bdev_retry_count": 3, 00:21:18.678 "transport_ack_timeout": 0, 00:21:18.678 "ctrlr_loss_timeout_sec": 0, 00:21:18.678 "reconnect_delay_sec": 0, 00:21:18.678 "fast_io_fail_timeout_sec": 0, 00:21:18.678 "disable_auto_failback": false, 00:21:18.678 "generate_uuids": false, 00:21:18.678 "transport_tos": 0, 00:21:18.678 "nvme_error_stat": false, 00:21:18.678 "rdma_srq_size": 0, 00:21:18.678 "io_path_stat": false, 00:21:18.678 "allow_accel_sequence": false, 00:21:18.678 "rdma_max_cq_size": 0, 00:21:18.678 "rdma_cm_event_timeout_ms": 0, 00:21:18.678 "dhchap_digests": [ 00:21:18.678 "sha256", 00:21:18.678 "sha384", 00:21:18.678 "sha512" 00:21:18.678 ], 00:21:18.678 "dhchap_dhgroups": [ 00:21:18.678 "null", 00:21:18.678 "ffdhe2048", 00:21:18.678 "ffdhe3072", 00:21:18.678 "ffdhe4096", 00:21:18.678 "ffdhe6144", 00:21:18.678 "ffdhe8192" 00:21:18.678 ] 00:21:18.678 } 00:21:18.678 }, 00:21:18.678 { 00:21:18.678 "method": "bdev_nvme_attach_controller", 00:21:18.678 "params": { 00:21:18.678 "name": "nvme0", 00:21:18.678 "trtype": "TCP", 00:21:18.678 "adrfam": "IPv4", 00:21:18.678 "traddr": "10.0.0.2", 00:21:18.678 "trsvcid": "4420", 00:21:18.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.678 "prchk_reftag": false, 00:21:18.678 "prchk_guard": false, 00:21:18.678 "ctrlr_loss_timeout_sec": 0, 00:21:18.678 "reconnect_delay_sec": 0, 00:21:18.678 "fast_io_fail_timeout_sec": 0, 00:21:18.678 "psk": "key0", 00:21:18.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.678 "hdgst": false, 00:21:18.678 "ddgst": false 00:21:18.678 } 00:21:18.678 }, 00:21:18.678 { 00:21:18.678 "method": "bdev_nvme_set_hotplug", 00:21:18.678 "params": { 00:21:18.678 "period_us": 100000, 00:21:18.678 "enable": false 00:21:18.678 } 00:21:18.678 }, 00:21:18.678 { 00:21:18.678 "method": "bdev_enable_histogram", 00:21:18.678 "params": { 00:21:18.678 "name": "nvme0n1", 00:21:18.678 "enable": true 00:21:18.678 } 00:21:18.678 }, 00:21:18.678 { 00:21:18.678 "method": "bdev_wait_for_examine" 00:21:18.678 } 00:21:18.678 ] 00:21:18.678 }, 00:21:18.678 { 00:21:18.678 "subsystem": "nbd", 00:21:18.678 "config": [] 00:21:18.678 } 00:21:18.678 ] 00:21:18.678 }' 00:21:18.678 [2024-05-13 20:34:34.579612] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:21:18.678 [2024-05-13 20:34:34.579664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3084973 ] 00:21:18.678 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.940 [2024-05-13 20:34:34.661827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.940 [2024-05-13 20:34:34.715477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.940 [2024-05-13 20:34:34.841020] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.512 20:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:19.512 20:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:19.512 20:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:19.512 20:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:19.774 20:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.774 20:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:19.774 Running I/O for 1 seconds... 00:21:20.719 00:21:20.719 Latency(us) 00:21:20.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.719 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:20.719 Verification LBA range: start 0x0 length 0x2000 00:21:20.719 nvme0n1 : 1.02 4862.89 19.00 0.00 0.00 26096.80 4614.83 40632.32 00:21:20.719 =================================================================================================================== 00:21:20.719 Total : 4862.89 19.00 0.00 0.00 26096.80 4614.83 40632.32 00:21:20.719 0 00:21:20.719 20:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:20.719 20:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:20.719 20:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:20.719 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:21:20.719 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:21:20.719 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:21:20.719 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:20.719 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:21:20.719 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:21:20.719 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:21:20.719 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:20.719 nvmf_trace.0 00:21:20.980 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:21:20.980 20:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3084973 00:21:20.980 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3084973 ']' 00:21:20.980 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3084973 
00:21:20.980 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:20.980 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:20.980 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3084973 00:21:20.980 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:20.980 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:20.980 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3084973' 00:21:20.980 killing process with pid 3084973 00:21:20.980 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3084973 00:21:20.980 Received shutdown signal, test time was about 1.000000 seconds 00:21:20.980 00:21:20.980 Latency(us) 00:21:20.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.980 =================================================================================================================== 00:21:20.980 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.980 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3084973 00:21:20.980 20:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:20.981 20:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:20.981 20:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:20.981 20:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:20.981 20:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:20.981 20:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:20.981 20:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:20.981 rmmod nvme_tcp 00:21:21.241 rmmod nvme_fabrics 00:21:21.241 rmmod nvme_keyring 00:21:21.241 20:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:21.241 20:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:21.241 20:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:21.241 20:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3084886 ']' 00:21:21.241 20:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3084886 00:21:21.241 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3084886 ']' 00:21:21.241 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3084886 00:21:21.241 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:21.241 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:21.241 20:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3084886 00:21:21.242 20:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:21.242 20:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:21.242 20:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3084886' 00:21:21.242 killing process with pid 3084886 00:21:21.242 20:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3084886 00:21:21.242 [2024-05-13 20:34:37.040565] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:21.242 20:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 3084886 00:21:21.242 20:34:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:21.242 20:34:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:21.242 20:34:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:21.242 20:34:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:21.242 20:34:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:21.242 20:34:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.242 20:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.242 20:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.791 20:34:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:23.791 20:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.hwDglSf929 /tmp/tmp.vQhwefbcxn /tmp/tmp.BjCdW02lfE 00:21:23.792 00:21:23.792 real 1m20.706s 00:21:23.792 user 2m1.607s 00:21:23.792 sys 0m27.105s 00:21:23.792 20:34:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:23.792 20:34:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.792 ************************************ 00:21:23.792 END TEST nvmf_tls 00:21:23.792 ************************************ 00:21:23.792 20:34:39 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:23.792 20:34:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:23.792 20:34:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:23.792 20:34:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:23.792 ************************************ 00:21:23.792 START TEST nvmf_fips 00:21:23.792 ************************************ 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:23.792 * Looking for test storage... 
00:21:23.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.792 20:34:39 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:23.792 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:23.793 Error setting digest 00:21:23.793 00F2BADE567F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:23.793 00F2BADE567F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:23.793 20:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:31.985 
20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:31.985 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:31.985 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:31.985 Found net devices under 0000:31:00.0: cvl_0_0 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:31.985 Found net devices under 0000:31:00.1: cvl_0_1 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:31.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:21:31.985 00:21:31.985 --- 10.0.0.2 ping statistics --- 00:21:31.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.985 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:31.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:31.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:21:31.985 00:21:31.985 --- 10.0.0.1 ping statistics --- 00:21:31.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.985 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:31.985 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:32.247 20:34:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:32.247 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:32.247 20:34:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:32.247 20:34:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:32.247 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3090358 00:21:32.247 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3090358 00:21:32.247 20:34:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:32.247 20:34:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3090358 ']' 00:21:32.247 20:34:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.247 20:34:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:32.247 20:34:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.247 20:34:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:32.247 20:34:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:32.247 [2024-05-13 20:34:48.009456] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:21:32.247 [2024-05-13 20:34:48.009520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.247 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.247 [2024-05-13 20:34:48.100796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.508 [2024-05-13 20:34:48.192776] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.508 [2024-05-13 20:34:48.192839] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:32.508 [2024-05-13 20:34:48.192847] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.508 [2024-05-13 20:34:48.192854] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.508 [2024-05-13 20:34:48.192860] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.508 [2024-05-13 20:34:48.192885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.079 20:34:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:33.079 20:34:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:21:33.079 20:34:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:33.079 20:34:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.079 20:34:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:33.079 20:34:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.079 20:34:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:33.079 20:34:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:33.079 20:34:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:33.079 20:34:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:33.079 20:34:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:33.079 20:34:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:33.079 20:34:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:33.079 20:34:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:33.079 [2024-05-13 20:34:48.985181] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.080 [2024-05-13 20:34:49.001155] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:33.080 [2024-05-13 20:34:49.001196] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:33.080 [2024-05-13 20:34:49.001384] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.340 [2024-05-13 20:34:49.028042] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:33.340 malloc0 00:21:33.340 20:34:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:33.340 20:34:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3090541 00:21:33.340 20:34:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3090541 /var/tmp/bdevperf.sock 00:21:33.340 20:34:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3090541 ']' 00:21:33.340 20:34:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.340 20:34:49 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:21:33.340 20:34:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:33.340 20:34:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:33.340 20:34:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:33.340 20:34:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:33.340 [2024-05-13 20:34:49.107567] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:21:33.340 [2024-05-13 20:34:49.107625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090541 ] 00:21:33.340 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.340 [2024-05-13 20:34:49.164829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.340 [2024-05-13 20:34:49.222016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.912 20:34:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:33.912 20:34:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:21:33.912 20:34:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:34.172 [2024-05-13 20:34:49.995493] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:34.172 [2024-05-13 20:34:49.995555] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:34.172 TLSTESTn1 00:21:34.172 20:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:34.432 Running I/O for 10 seconds... 
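[Note] The trace above is the heart of the TLS I/O check: fips.sh writes the PSK shown earlier to key.txt (mode 0600), starts bdevperf with -z so it idles waiting for RPCs on /var/tmp/bdevperf.sock, attaches an NVMe/TCP controller using that PSK, and only then kicks off the queued verify workload. A condensed sketch of that sequence follows; the long workspace prefixes are shortened for readability, everything else is taken from this run.

    # assumes nvmf_tgt is already listening for TLS on 10.0.0.2:4420 (set up earlier by setup_nvmf_tgt_conf)
    # and bdevperf was started as: bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
    KEY=spdk/test/nvmf/fips/key.txt   # holds NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: and is chmod 0600
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$KEY"                  # PSK-by-path form; the log warns it is deprecated for removal in v24.09
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    # the Latency(us) table printed below is the result of perform_tests after the 10 s verify run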
00:21:44.433 00:21:44.433 Latency(us) 00:21:44.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.433 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:44.433 Verification LBA range: start 0x0 length 0x2000 00:21:44.433 TLSTESTn1 : 10.01 5028.28 19.64 0.00 0.00 25421.05 6198.61 138062.51 00:21:44.433 =================================================================================================================== 00:21:44.433 Total : 5028.28 19.64 0.00 0.00 25421.05 6198.61 138062.51 00:21:44.433 0 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:44.433 nvmf_trace.0 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3090541 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3090541 ']' 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3090541 00:21:44.433 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:21:44.434 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:44.434 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3090541 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3090541' 00:21:44.695 killing process with pid 3090541 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3090541 00:21:44.695 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.695 00:21:44.695 Latency(us) 00:21:44.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.695 =================================================================================================================== 00:21:44.695 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.695 [2024-05-13 20:35:00.383870] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3090541 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips 
-- nvmf/common.sh@488 -- # nvmfcleanup 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:44.695 rmmod nvme_tcp 00:21:44.695 rmmod nvme_fabrics 00:21:44.695 rmmod nvme_keyring 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3090358 ']' 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3090358 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3090358 ']' 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3090358 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3090358 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3090358' 00:21:44.695 killing process with pid 3090358 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3090358 00:21:44.695 [2024-05-13 20:35:00.606823] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:44.695 [2024-05-13 20:35:00.606856] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:44.695 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3090358 00:21:44.956 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:44.956 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:44.956 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:44.956 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.956 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.956 20:35:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.956 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.956 20:35:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.869 20:35:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:46.869 20:35:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:46.869 00:21:46.869 real 0m23.468s 00:21:46.869 user 0m24.122s 00:21:46.869 sys 0m9.986s 00:21:46.869 20:35:02 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:46.869 20:35:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:46.869 ************************************ 00:21:46.869 END TEST nvmf_fips 00:21:46.869 ************************************ 00:21:47.132 20:35:02 nvmf_tcp -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:21:47.132 20:35:02 nvmf_tcp -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:47.132 20:35:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:47.132 20:35:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:47.132 20:35:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.132 ************************************ 00:21:47.132 START TEST nvmf_fuzz 00:21:47.132 ************************************ 00:21:47.132 20:35:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:47.132 * Looking for test storage... 00:21:47.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:47.132 20:35:02 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.132 20:35:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:21:47.132 20:35:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.132 20:35:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.132 20:35:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.132 20:35:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.132 20:35:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.132 20:35:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.132 20:35:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.132 20:35:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.132 20:35:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.132 20:35:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.133 20:35:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.277 20:35:10 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:55.277 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:55.277 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.277 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:55.278 Found net devices under 0000:31:00.0: cvl_0_0 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.278 20:35:10 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:55.278 Found net devices under 0000:31:00.1: cvl_0_1 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.278 20:35:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:21:55.278 00:21:55.278 --- 10.0.0.2 ping statistics --- 00:21:55.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.278 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:55.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.413 ms 00:21:55.278 00:21:55.278 --- 10.0.0.1 ping statistics --- 00:21:55.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.278 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3097404 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3097404 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3097404 ']' 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
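The trace above amounts to a small how-to for splitting the two E810 ports between a target network namespace and the host-side initiator. A minimal sketch of the same steps, using the interface names (cvl_0_0, cvl_0_1), addresses, and port seen in this particular run rather than anything fixed by the suite:

  # Map each E810 PCI function to its net device, as the device-discovery loop above does.
  for pci in 0000:31:00.0 0000:31:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"        # -> cvl_0_0 and cvl_0_1 in this run
  done

  # nvmf_tcp_init: one port goes into a dedicated namespace for the target,
  # the other stays in the host namespace for the initiator.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                    # host -> namespace path
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # namespace -> host path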
00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:55.278 20:35:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:56.222 20:35:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:56.222 20:35:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:21:56.222 20:35:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:56.222 20:35:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.222 20:35:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:56.222 20:35:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.222 20:35:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:56.222 20:35:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.222 20:35:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:56.222 Malloc0 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:56.222 20:35:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:22:28.338 Fuzzing completed. 
Shutting down the fuzz application 00:22:28.338 00:22:28.338 Dumping successful admin opcodes: 00:22:28.338 8, 9, 10, 24, 00:22:28.338 Dumping successful io opcodes: 00:22:28.338 0, 9, 00:22:28.338 NS: 0x200003aeff00 I/O qp, Total commands completed: 928717, total successful commands: 5411, random_seed: 3157533568 00:22:28.338 NS: 0x200003aeff00 admin qp, Total commands completed: 118209, total successful commands: 967, random_seed: 1119692352 00:22:28.338 20:35:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:28.338 Fuzzing completed. Shutting down the fuzz application 00:22:28.338 00:22:28.338 Dumping successful admin opcodes: 00:22:28.338 24, 00:22:28.338 Dumping successful io opcodes: 00:22:28.338 00:22:28.338 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2039778641 00:22:28.338 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2039849361 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:28.338 rmmod nvme_tcp 00:22:28.338 rmmod nvme_fabrics 00:22:28.338 rmmod nvme_keyring 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 3097404 ']' 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 3097404 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3097404 ']' 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 3097404 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3097404 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
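For reference, the target that the fuzzer attacks is assembled by the rpc_cmd calls traced above. The sketch below replays them by hand; rpc_cmd is assumed to be the suite's thin wrapper around scripts/rpc.py and the /var/tmp/spdk.sock socket, and every flag is copied from the trace rather than chosen here:

  # Start the target on one core inside the target namespace.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

  # Stand up a single TCP subsystem backed by a 64 MiB malloc bdev with 512 B blocks.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # First a 30-second randomized pass (-t 30, seed -S 123456), then a pass driven
  # by the example.json command list, both against the same transport ID.
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j ./test/app/fuzz/nvme_fuzz/example.json -a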
00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3097404' 00:22:28.338 killing process with pid 3097404 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 3097404 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 3097404 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.338 20:35:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.253 20:35:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:30.253 20:35:45 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:22:30.253 00:22:30.253 real 0m43.064s 00:22:30.253 user 0m57.269s 00:22:30.253 sys 0m14.861s 00:22:30.253 20:35:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:30.253 20:35:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:30.253 ************************************ 00:22:30.253 END TEST nvmf_fuzz 00:22:30.253 ************************************ 00:22:30.253 20:35:45 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:30.253 20:35:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:30.253 20:35:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:30.253 20:35:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:30.253 ************************************ 00:22:30.253 START TEST nvmf_multiconnection 00:22:30.253 ************************************ 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:30.253 * Looking for test storage... 
00:22:30.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:22:30.253 20:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.447 20:35:53 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:38.447 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:38.447 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.447 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:38.448 Found net devices under 0000:31:00.0: cvl_0_0 00:22:38.448 20:35:53 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:38.448 Found net devices under 0000:31:00.1: cvl_0_1 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.448 20:35:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:22:38.448 00:22:38.448 --- 10.0.0.2 ping statistics --- 00:22:38.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.448 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:22:38.448 00:22:38.448 --- 10.0.0.1 ping statistics --- 00:22:38.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.448 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=3108413 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 3108413 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 3108413 ']' 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
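The multiconnection run repeats the namespace setup and then starts the target on four cores (-m 0xF, matching the four reactor threads reported below) before driving it over RPC. The wait-for-RPC step is sketched here with a simple poll; the suite's waitforlisten helper is more elaborate, and rpc_get_methods is used only as a convenient probe of the socket:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Block until the target's UNIX-domain RPC socket answers (retry cap omitted here).
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done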
00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:38.448 20:35:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:38.448 [2024-05-13 20:35:54.218649] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:22:38.448 [2024-05-13 20:35:54.218714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.448 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.448 [2024-05-13 20:35:54.296754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.448 [2024-05-13 20:35:54.372095] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.448 [2024-05-13 20:35:54.372136] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.448 [2024-05-13 20:35:54.372144] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.448 [2024-05-13 20:35:54.372151] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.448 [2024-05-13 20:35:54.372156] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.448 [2024-05-13 20:35:54.372292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.448 [2024-05-13 20:35:54.372421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.448 [2024-05-13 20:35:54.372722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.448 [2024-05-13 20:35:54.372726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.391 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:39.391 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:22:39.391 20:35:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.391 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.391 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.391 20:35:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.391 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:39.391 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.391 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.391 [2024-05-13 20:35:55.046887] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.391 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.391 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:22:39.391 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 Malloc1 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 [2024-05-13 20:35:55.114081] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:39.392 [2024-05-13 20:35:55.114319] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 Malloc2 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 Malloc3 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 Malloc4 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 Malloc5 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.392 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 Malloc6 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:39.654 
20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 Malloc7 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 
-- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 Malloc8 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 Malloc9 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 20:35:55 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 Malloc10 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.654 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.915 Malloc11 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.915 20:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:41.300 20:35:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:41.300 20:35:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:41.300 20:35:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:41.300 20:35:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:41.300 20:35:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:43.846 20:35:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:43.846 20:35:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:43.846 20:35:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:22:43.846 20:35:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:43.846 20:35:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:43.846 20:35:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:22:43.846 20:35:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.846 20:35:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:45.234 20:36:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:45.234 20:36:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:45.234 20:36:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:45.234 20:36:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:45.234 20:36:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:47.148 20:36:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:47.148 20:36:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:47.148 20:36:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:22:47.148 20:36:02 
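For reference, the target-side setup traced above can be reproduced with the same SPDK RPCs outside the harness; rpc_cmd here is the test wrapper around scripts/rpc.py. A minimal sketch of one loop iteration, with values and naming taken from the log and N=7 chosen as the example:

  N=7
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$N                                  # 64 MB malloc bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$N -a -s SPDK$N         # allow any host, serial number SPDK$N
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$N Malloc$N             # expose the bdev as a namespace
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$N -t tcp -a 10.0.0.2 -s 4420   # TCP listener on port 4420

The same four calls run once per subsystem, Malloc1/cnode1 through Malloc11/cnode11.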
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:47.148 20:36:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:47.148 20:36:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:22:47.148 20:36:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:47.148 20:36:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:48.530 20:36:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:48.530 20:36:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:48.530 20:36:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:48.530 20:36:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:48.531 20:36:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:51.071 20:36:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:51.071 20:36:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:51.071 20:36:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:22:51.071 20:36:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:51.071 20:36:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:51.071 20:36:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:22:51.071 20:36:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.071 20:36:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:52.456 20:36:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:52.456 20:36:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:52.456 20:36:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:52.456 20:36:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:52.456 20:36:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:54.371 20:36:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:54.371 20:36:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:54.371 20:36:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:22:54.371 20:36:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:54.371 20:36:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:54.371 20:36:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 
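The waitforserial calls interleaved with the connects poll until a block device with the expected serial appears. A condensed sketch of that helper, reconstructed from the xtrace above (the failure path is an assumption; every iteration in this run finds the device on the first check and returns 0):

  waitforserial() {
      local serial=$1 i=0
      local nvme_device_counter=1 nvme_devices=0
      while (( i++ <= 15 )); do
          sleep 2
          # count lsblk rows whose SERIAL column matches, e.g. SPDK7
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices == nvme_device_counter )) && return 0
      done
      return 1
  }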
00:22:54.371 20:36:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.371 20:36:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:56.289 20:36:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:56.289 20:36:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:56.289 20:36:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:56.289 20:36:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:56.289 20:36:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:58.206 20:36:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:58.206 20:36:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:58.206 20:36:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:22:58.206 20:36:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:58.206 20:36:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:58.206 20:36:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:22:58.206 20:36:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:58.206 20:36:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:59.592 20:36:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:59.593 20:36:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:59.593 20:36:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:59.593 20:36:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:59.593 20:36:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:23:02.143 20:36:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:23:02.143 20:36:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:23:02.143 20:36:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:23:02.143 20:36:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:23:02.143 20:36:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:23:02.143 20:36:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:23:02.143 20:36:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:02.143 20:36:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:23:03.530 20:36:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:03.530 20:36:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:23:03.530 20:36:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:23:03.530 20:36:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:23:03.530 20:36:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:23:05.445 20:36:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:23:05.445 20:36:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:23:05.445 20:36:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:23:05.445 20:36:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:23:05.445 20:36:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:23:05.445 20:36:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:23:05.445 20:36:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:05.445 20:36:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:23:07.363 20:36:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:07.363 20:36:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:23:07.363 20:36:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:23:07.363 20:36:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:23:07.363 20:36:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:23:09.320 20:36:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:23:09.320 20:36:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:23:09.320 20:36:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:23:09.320 20:36:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:23:09.320 20:36:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:23:09.320 20:36:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:23:09.320 20:36:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.320 20:36:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:23:11.236 20:36:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:11.236 20:36:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 
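On the host side each subsystem is attached with nvme-cli exactly as traced; one connect written out as a standalone command (host NQN and UUID copied from the log, cnode7 as the example):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode7 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
  # then: waitforserial SPDK7   (polling loop sketched earlier)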
00:23:11.236 20:36:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:23:11.236 20:36:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:23:11.236 20:36:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:23:13.153 20:36:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:23:13.153 20:36:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:23:13.153 20:36:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:23:13.153 20:36:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:23:13.153 20:36:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:23:13.153 20:36:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:23:13.153 20:36:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.153 20:36:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:23:15.069 20:36:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:15.069 20:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:23:15.069 20:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:23:15.069 20:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:23:15.070 20:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:23:16.985 20:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:23:16.985 20:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:23:16.985 20:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:23:16.985 20:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:23:16.985 20:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:23:16.985 20:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:23:16.985 20:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.985 20:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:23:18.904 20:36:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:18.905 20:36:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:23:18.905 20:36:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:23:18.905 20:36:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:23:18.905 20:36:34 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1201 -- # sleep 2 00:23:20.819 20:36:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:23:20.819 20:36:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:23:20.819 20:36:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:23:20.819 20:36:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:23:20.819 20:36:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:23:20.819 20:36:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:23:20.820 20:36:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:20.820 [global] 00:23:20.820 thread=1 00:23:20.820 invalidate=1 00:23:20.820 rw=read 00:23:20.820 time_based=1 00:23:20.820 runtime=10 00:23:20.820 ioengine=libaio 00:23:20.820 direct=1 00:23:20.820 bs=262144 00:23:20.820 iodepth=64 00:23:20.820 norandommap=1 00:23:20.820 numjobs=1 00:23:20.820 00:23:20.820 [job0] 00:23:20.820 filename=/dev/nvme0n1 00:23:20.820 [job1] 00:23:20.820 filename=/dev/nvme10n1 00:23:20.820 [job2] 00:23:20.820 filename=/dev/nvme1n1 00:23:20.820 [job3] 00:23:20.820 filename=/dev/nvme2n1 00:23:20.820 [job4] 00:23:20.820 filename=/dev/nvme3n1 00:23:20.820 [job5] 00:23:20.820 filename=/dev/nvme4n1 00:23:20.820 [job6] 00:23:20.820 filename=/dev/nvme5n1 00:23:20.820 [job7] 00:23:20.820 filename=/dev/nvme6n1 00:23:20.820 [job8] 00:23:20.820 filename=/dev/nvme7n1 00:23:20.820 [job9] 00:23:20.820 filename=/dev/nvme8n1 00:23:20.820 [job10] 00:23:20.820 filename=/dev/nvme9n1 00:23:21.101 Could not set queue depth (nvme0n1) 00:23:21.101 Could not set queue depth (nvme10n1) 00:23:21.102 Could not set queue depth (nvme1n1) 00:23:21.102 Could not set queue depth (nvme2n1) 00:23:21.102 Could not set queue depth (nvme3n1) 00:23:21.102 Could not set queue depth (nvme4n1) 00:23:21.102 Could not set queue depth (nvme5n1) 00:23:21.102 Could not set queue depth (nvme6n1) 00:23:21.102 Could not set queue depth (nvme7n1) 00:23:21.102 Could not set queue depth (nvme8n1) 00:23:21.102 Could not set queue depth (nvme9n1) 00:23:21.369 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.369 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.369 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.369 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.369 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.369 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.369 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.369 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.369 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.369 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
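Collected into its on-disk form, the job file that fio-wrapper echoes above is simply the following (job0 and job10 shown; job1 through job9 differ only in the /dev/nvme*n1 filename):

  [global]
  thread=1
  invalidate=1
  rw=read
  time_based=1
  runtime=10
  ioengine=libaio
  direct=1
  bs=262144
  iodepth=64
  norandommap=1
  numjobs=1

  [job0]
  filename=/dev/nvme0n1

  [job10]
  filename=/dev/nvme9n1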
256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.369 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:21.369 fio-3.35 00:23:21.369 Starting 11 threads 00:23:33.601 00:23:33.601 job0: (groupid=0, jobs=1): err= 0: pid=3117492: Mon May 13 20:36:47 2024 00:23:33.601 read: IOPS=993, BW=248MiB/s (260MB/s)(2501MiB/10074msec) 00:23:33.601 slat (usec): min=6, max=31329, avg=810.16, stdev=2350.18 00:23:33.601 clat (msec): min=2, max=156, avg=63.52, stdev=21.88 00:23:33.601 lat (msec): min=2, max=161, avg=64.33, stdev=22.27 00:23:33.601 clat percentiles (msec): 00:23:33.601 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 33], 20.00th=[ 47], 00:23:33.601 | 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 67], 60.00th=[ 74], 00:23:33.601 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 87], 95.00th=[ 92], 00:23:33.601 | 99.00th=[ 106], 99.50th=[ 123], 99.90th=[ 155], 99.95th=[ 157], 00:23:33.601 | 99.99th=[ 157] 00:23:33.601 bw ( KiB/s): min=197120, max=399360, per=9.81%, avg=254515.20, stdev=59766.20, samples=20 00:23:33.601 iops : min= 770, max= 1560, avg=994.20, stdev=233.46, samples=20 00:23:33.601 lat (msec) : 4=0.02%, 10=1.21%, 20=3.60%, 50=19.43%, 100=73.85% 00:23:33.601 lat (msec) : 250=1.89% 00:23:33.601 cpu : usr=0.32%, sys=2.96%, ctx=2684, majf=0, minf=4097 00:23:33.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:33.601 issued rwts: total=10005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:33.601 job1: (groupid=0, jobs=1): err= 0: pid=3117507: Mon May 13 20:36:47 2024 00:23:33.601 read: IOPS=912, BW=228MiB/s (239MB/s)(2301MiB/10086msec) 00:23:33.601 slat (usec): min=8, max=119257, avg=956.18, stdev=3812.84 00:23:33.601 clat (msec): min=3, max=255, avg=69.07, stdev=38.18 00:23:33.601 lat (msec): min=3, max=264, avg=70.02, stdev=38.78 00:23:33.601 clat percentiles (msec): 00:23:33.601 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 33], 00:23:33.601 | 30.00th=[ 39], 40.00th=[ 53], 50.00th=[ 61], 60.00th=[ 71], 00:23:33.601 | 70.00th=[ 87], 80.00th=[ 106], 90.00th=[ 125], 95.00th=[ 146], 00:23:33.601 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 207], 99.95th=[ 211], 00:23:33.601 | 99.99th=[ 255] 00:23:33.601 bw ( KiB/s): min=94720, max=445952, per=9.02%, avg=233958.40, stdev=104966.44, samples=20 00:23:33.601 iops : min= 370, max= 1742, avg=913.90, stdev=410.03, samples=20 00:23:33.601 lat (msec) : 4=0.03%, 10=1.39%, 20=2.00%, 50=33.67%, 100=39.89% 00:23:33.601 lat (msec) : 250=22.99%, 500=0.02% 00:23:33.601 cpu : usr=0.32%, sys=3.11%, ctx=2259, majf=0, minf=3535 00:23:33.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:33.601 issued rwts: total=9203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:33.601 job2: (groupid=0, jobs=1): err= 0: pid=3117528: Mon May 13 20:36:47 2024 00:23:33.601 read: IOPS=921, BW=230MiB/s (242MB/s)(2309MiB/10016msec) 00:23:33.601 slat (usec): min=7, max=108132, avg=931.75, stdev=3905.04 00:23:33.601 clat (msec): min=2, max=249, avg=68.38, stdev=36.58 00:23:33.601 lat (msec): 
min=3, max=249, avg=69.32, stdev=37.19 00:23:33.601 clat percentiles (msec): 00:23:33.601 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 28], 20.00th=[ 33], 00:23:33.601 | 30.00th=[ 41], 40.00th=[ 52], 50.00th=[ 65], 60.00th=[ 78], 00:23:33.601 | 70.00th=[ 86], 80.00th=[ 102], 90.00th=[ 120], 95.00th=[ 140], 00:23:33.601 | 99.00th=[ 157], 99.50th=[ 159], 99.90th=[ 184], 99.95th=[ 239], 00:23:33.601 | 99.99th=[ 249] 00:23:33.601 bw ( KiB/s): min=134656, max=458752, per=9.05%, avg=234777.60, stdev=74612.42, samples=20 00:23:33.602 iops : min= 526, max= 1792, avg=917.10, stdev=291.45, samples=20 00:23:33.602 lat (msec) : 4=0.02%, 10=1.01%, 20=3.58%, 50=34.44%, 100=39.96% 00:23:33.602 lat (msec) : 250=20.99% 00:23:33.602 cpu : usr=0.34%, sys=3.12%, ctx=2307, majf=0, minf=4097 00:23:33.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:33.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:33.602 issued rwts: total=9234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:33.602 job3: (groupid=0, jobs=1): err= 0: pid=3117540: Mon May 13 20:36:47 2024 00:23:33.602 read: IOPS=820, BW=205MiB/s (215MB/s)(2066MiB/10075msec) 00:23:33.602 slat (usec): min=6, max=120081, avg=1045.85, stdev=4339.58 00:23:33.602 clat (msec): min=3, max=233, avg=76.88, stdev=35.34 00:23:33.602 lat (msec): min=3, max=277, avg=77.93, stdev=36.08 00:23:33.602 clat percentiles (msec): 00:23:33.602 | 1.00th=[ 10], 5.00th=[ 29], 10.00th=[ 40], 20.00th=[ 48], 00:23:33.602 | 30.00th=[ 54], 40.00th=[ 62], 50.00th=[ 73], 60.00th=[ 80], 00:23:33.602 | 70.00th=[ 90], 80.00th=[ 106], 90.00th=[ 133], 95.00th=[ 148], 00:23:33.602 | 99.00th=[ 167], 99.50th=[ 171], 99.90th=[ 188], 99.95th=[ 230], 00:23:33.602 | 99.99th=[ 234] 00:23:33.602 bw ( KiB/s): min=109568, max=301056, per=8.10%, avg=209970.65, stdev=56314.41, samples=20 00:23:33.602 iops : min= 428, max= 1176, avg=820.15, stdev=219.94, samples=20 00:23:33.602 lat (msec) : 4=0.08%, 10=0.92%, 20=2.03%, 50=20.63%, 100=52.38% 00:23:33.602 lat (msec) : 250=23.95% 00:23:33.602 cpu : usr=0.36%, sys=2.52%, ctx=2128, majf=0, minf=4097 00:23:33.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:33.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:33.602 issued rwts: total=8264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:33.602 job4: (groupid=0, jobs=1): err= 0: pid=3117546: Mon May 13 20:36:47 2024 00:23:33.602 read: IOPS=952, BW=238MiB/s (250MB/s)(2388MiB/10027msec) 00:23:33.602 slat (usec): min=7, max=115885, avg=956.25, stdev=4071.49 00:23:33.602 clat (msec): min=9, max=280, avg=66.11, stdev=33.52 00:23:33.602 lat (msec): min=9, max=280, avg=67.07, stdev=34.11 00:23:33.602 clat percentiles (msec): 00:23:33.602 | 1.00th=[ 20], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 43], 00:23:33.602 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 61], 00:23:33.602 | 70.00th=[ 69], 80.00th=[ 87], 90.00th=[ 118], 95.00th=[ 146], 00:23:33.602 | 99.00th=[ 165], 99.50th=[ 171], 99.90th=[ 251], 99.95th=[ 259], 00:23:33.602 | 99.99th=[ 279] 00:23:33.602 bw ( KiB/s): min=102400, max=388608, per=9.37%, avg=242918.40, stdev=89572.43, samples=20 00:23:33.602 iops : min= 400, max= 1518, 
avg=948.90, stdev=349.89, samples=20 00:23:33.602 lat (msec) : 10=0.04%, 20=1.04%, 50=40.78%, 100=42.49%, 250=15.54% 00:23:33.602 lat (msec) : 500=0.12% 00:23:33.602 cpu : usr=0.30%, sys=3.28%, ctx=2214, majf=0, minf=4097 00:23:33.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:33.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:33.602 issued rwts: total=9552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:33.602 job5: (groupid=0, jobs=1): err= 0: pid=3117571: Mon May 13 20:36:47 2024 00:23:33.602 read: IOPS=1015, BW=254MiB/s (266MB/s)(2553MiB/10062msec) 00:23:33.602 slat (usec): min=6, max=22947, avg=933.56, stdev=2317.04 00:23:33.602 clat (msec): min=15, max=147, avg=62.07, stdev=20.98 00:23:33.602 lat (msec): min=15, max=147, avg=63.01, stdev=21.30 00:23:33.602 clat percentiles (msec): 00:23:33.602 | 1.00th=[ 26], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 40], 00:23:33.602 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 66], 60.00th=[ 73], 00:23:33.602 | 70.00th=[ 78], 80.00th=[ 82], 90.00th=[ 87], 95.00th=[ 92], 00:23:33.602 | 99.00th=[ 104], 99.50th=[ 108], 99.90th=[ 127], 99.95th=[ 131], 00:23:33.602 | 99.99th=[ 133] 00:23:33.602 bw ( KiB/s): min=188928, max=468480, per=10.02%, avg=259840.00, stdev=86259.54, samples=20 00:23:33.602 iops : min= 738, max= 1830, avg=1015.00, stdev=336.95, samples=20 00:23:33.602 lat (msec) : 20=0.22%, 50=32.17%, 100=65.77%, 250=1.84% 00:23:33.602 cpu : usr=0.38%, sys=3.42%, ctx=2269, majf=0, minf=4097 00:23:33.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:33.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:33.602 issued rwts: total=10213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:33.602 job6: (groupid=0, jobs=1): err= 0: pid=3117583: Mon May 13 20:36:47 2024 00:23:33.602 read: IOPS=1249, BW=312MiB/s (327MB/s)(3148MiB/10079msec) 00:23:33.602 slat (usec): min=6, max=34287, avg=791.09, stdev=2011.23 00:23:33.602 clat (msec): min=11, max=183, avg=50.37, stdev=22.81 00:23:33.602 lat (msec): min=11, max=183, avg=51.17, stdev=23.14 00:23:33.602 clat percentiles (msec): 00:23:33.602 | 1.00th=[ 25], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30], 00:23:33.602 | 30.00th=[ 36], 40.00th=[ 43], 50.00th=[ 46], 60.00th=[ 49], 00:23:33.602 | 70.00th=[ 54], 80.00th=[ 64], 90.00th=[ 89], 95.00th=[ 103], 00:23:33.602 | 99.00th=[ 112], 99.50th=[ 117], 99.90th=[ 161], 99.95th=[ 169], 00:23:33.602 | 99.99th=[ 184] 00:23:33.602 bw ( KiB/s): min=157184, max=505344, per=12.37%, avg=320735.30, stdev=113336.42, samples=20 00:23:33.602 iops : min= 614, max= 1974, avg=1252.85, stdev=442.70, samples=20 00:23:33.602 lat (msec) : 20=0.33%, 50=63.35%, 100=30.33%, 250=5.98% 00:23:33.602 cpu : usr=0.53%, sys=3.67%, ctx=2649, majf=0, minf=4097 00:23:33.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:33.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:33.602 issued rwts: total=12590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:33.602 job7: 
(groupid=0, jobs=1): err= 0: pid=3117593: Mon May 13 20:36:47 2024 00:23:33.602 read: IOPS=665, BW=166MiB/s (174MB/s)(1676MiB/10073msec) 00:23:33.602 slat (usec): min=6, max=82630, avg=1343.39, stdev=4183.52 00:23:33.602 clat (msec): min=4, max=229, avg=94.72, stdev=32.27 00:23:33.602 lat (msec): min=4, max=229, avg=96.07, stdev=32.88 00:23:33.602 clat percentiles (msec): 00:23:33.602 | 1.00th=[ 13], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 67], 00:23:33.602 | 30.00th=[ 80], 40.00th=[ 86], 50.00th=[ 95], 60.00th=[ 102], 00:23:33.602 | 70.00th=[ 108], 80.00th=[ 121], 90.00th=[ 142], 95.00th=[ 155], 00:23:33.602 | 99.00th=[ 167], 99.50th=[ 171], 99.90th=[ 199], 99.95th=[ 199], 00:23:33.602 | 99.99th=[ 230] 00:23:33.602 bw ( KiB/s): min=98304, max=282624, per=6.55%, avg=169967.50, stdev=47058.47, samples=20 00:23:33.602 iops : min= 384, max= 1104, avg=663.90, stdev=183.83, samples=20 00:23:33.602 lat (msec) : 10=0.54%, 20=1.01%, 50=4.77%, 100=51.59%, 250=42.09% 00:23:33.602 cpu : usr=0.23%, sys=2.18%, ctx=1698, majf=0, minf=4097 00:23:33.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:33.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:33.602 issued rwts: total=6703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:33.602 job8: (groupid=0, jobs=1): err= 0: pid=3117623: Mon May 13 20:36:47 2024 00:23:33.602 read: IOPS=677, BW=169MiB/s (178MB/s)(1707MiB/10080msec) 00:23:33.602 slat (usec): min=7, max=90738, avg=1203.01, stdev=4341.41 00:23:33.602 clat (msec): min=4, max=247, avg=93.13, stdev=32.68 00:23:33.602 lat (msec): min=4, max=247, avg=94.33, stdev=33.31 00:23:33.602 clat percentiles (msec): 00:23:33.602 | 1.00th=[ 16], 5.00th=[ 42], 10.00th=[ 55], 20.00th=[ 70], 00:23:33.602 | 30.00th=[ 78], 40.00th=[ 84], 50.00th=[ 90], 60.00th=[ 99], 00:23:33.602 | 70.00th=[ 108], 80.00th=[ 118], 90.00th=[ 142], 95.00th=[ 153], 00:23:33.602 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 199], 99.95th=[ 213], 00:23:33.602 | 99.99th=[ 249] 00:23:33.602 bw ( KiB/s): min=112128, max=225280, per=6.68%, avg=173142.55, stdev=36934.76, samples=20 00:23:33.602 iops : min= 438, max= 880, avg=676.30, stdev=144.29, samples=20 00:23:33.602 lat (msec) : 10=0.31%, 20=1.44%, 50=6.63%, 100=53.95%, 250=37.67% 00:23:33.602 cpu : usr=0.27%, sys=2.27%, ctx=1861, majf=0, minf=4097 00:23:33.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:33.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:33.602 issued rwts: total=6828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:33.602 job9: (groupid=0, jobs=1): err= 0: pid=3117632: Mon May 13 20:36:47 2024 00:23:33.602 read: IOPS=983, BW=246MiB/s (258MB/s)(2465MiB/10021msec) 00:23:33.602 slat (usec): min=8, max=53027, avg=930.35, stdev=2498.37 00:23:33.602 clat (msec): min=8, max=140, avg=64.05, stdev=17.35 00:23:33.602 lat (msec): min=8, max=156, avg=64.98, stdev=17.54 00:23:33.602 clat percentiles (msec): 00:23:33.602 | 1.00th=[ 26], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 50], 00:23:33.602 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 68], 00:23:33.602 | 70.00th=[ 73], 80.00th=[ 80], 90.00th=[ 87], 95.00th=[ 94], 00:23:33.602 | 99.00th=[ 105], 
99.50th=[ 115], 99.90th=[ 138], 99.95th=[ 140], 00:23:33.602 | 99.99th=[ 140] 00:23:33.602 bw ( KiB/s): min=197120, max=336896, per=9.67%, avg=250826.45, stdev=41628.93, samples=20 00:23:33.602 iops : min= 770, max= 1316, avg=979.75, stdev=162.63, samples=20 00:23:33.602 lat (msec) : 10=0.01%, 20=0.55%, 50=21.61%, 100=75.54%, 250=2.29% 00:23:33.602 cpu : usr=0.44%, sys=3.30%, ctx=2229, majf=0, minf=4097 00:23:33.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:33.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:33.602 issued rwts: total=9860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:33.602 job10: (groupid=0, jobs=1): err= 0: pid=3117642: Mon May 13 20:36:47 2024 00:23:33.602 read: IOPS=964, BW=241MiB/s (253MB/s)(2431MiB/10080msec) 00:23:33.602 slat (usec): min=6, max=83248, avg=911.10, stdev=2888.69 00:23:33.602 clat (msec): min=6, max=213, avg=65.36, stdev=32.17 00:23:33.602 lat (msec): min=7, max=213, avg=66.27, stdev=32.62 00:23:33.602 clat percentiles (msec): 00:23:33.602 | 1.00th=[ 19], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 34], 00:23:33.602 | 30.00th=[ 42], 40.00th=[ 52], 50.00th=[ 64], 60.00th=[ 74], 00:23:33.603 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 106], 95.00th=[ 133], 00:23:33.603 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 184], 00:23:33.603 | 99.99th=[ 213] 00:23:33.603 bw ( KiB/s): min=117248, max=470016, per=9.54%, avg=247321.60, stdev=102521.26, samples=20 00:23:33.603 iops : min= 458, max= 1836, avg=966.10, stdev=400.47, samples=20 00:23:33.603 lat (msec) : 10=0.17%, 20=1.00%, 50=37.03%, 100=50.09%, 250=11.70% 00:23:33.603 cpu : usr=0.35%, sys=3.22%, ctx=2344, majf=0, minf=4097 00:23:33.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:33.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:33.603 issued rwts: total=9724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.603 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:33.603 00:23:33.603 Run status group 0 (all jobs): 00:23:33.603 READ: bw=2533MiB/s (2656MB/s), 166MiB/s-312MiB/s (174MB/s-327MB/s), io=24.9GiB (26.8GB), run=10016-10086msec 00:23:33.603 00:23:33.603 Disk stats (read/write): 00:23:33.603 nvme0n1: ios=19674/0, merge=0/0, ticks=1220805/0, in_queue=1220805, util=96.43% 00:23:33.603 nvme10n1: ios=18101/0, merge=0/0, ticks=1214547/0, in_queue=1214547, util=96.73% 00:23:33.603 nvme1n1: ios=17780/0, merge=0/0, ticks=1222117/0, in_queue=1222117, util=97.06% 00:23:33.603 nvme2n1: ios=16219/0, merge=0/0, ticks=1216236/0, in_queue=1216236, util=97.31% 00:23:33.603 nvme3n1: ios=18540/0, merge=0/0, ticks=1218949/0, in_queue=1218949, util=97.38% 00:23:33.603 nvme4n1: ios=20058/0, merge=0/0, ticks=1215390/0, in_queue=1215390, util=97.74% 00:23:33.603 nvme5n1: ios=24873/0, merge=0/0, ticks=1213843/0, in_queue=1213843, util=97.99% 00:23:33.603 nvme6n1: ios=13112/0, merge=0/0, ticks=1215013/0, in_queue=1215013, util=98.15% 00:23:33.603 nvme7n1: ios=13374/0, merge=0/0, ticks=1217195/0, in_queue=1217195, util=98.74% 00:23:33.603 nvme8n1: ios=19185/0, merge=0/0, ticks=1219861/0, in_queue=1219861, util=98.92% 00:23:33.603 nvme9n1: ios=19094/0, merge=0/0, ticks=1217837/0, in_queue=1217837, util=99.16% 00:23:33.603 20:36:47 
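Comparing this invocation and the randwrite one that follows with the generated job files gives the flag mapping of fio-wrapper as used here (inferred from this log only, not from the script's documentation):

  # scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
  #   -i 262144    -> bs=262144      (256 KiB requests)
  #   -d 64        -> iodepth=64
  #   -t randwrite -> rw=randwrite   (rw=read in the pass above)
  #   -r 10        -> runtime=10 with time_based=1
  #   -p nvmf      -> target the connected /dev/nvme*n1 devices, one [jobN] per namespace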
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:23:33.603 [global] 00:23:33.603 thread=1 00:23:33.603 invalidate=1 00:23:33.603 rw=randwrite 00:23:33.603 time_based=1 00:23:33.603 runtime=10 00:23:33.603 ioengine=libaio 00:23:33.603 direct=1 00:23:33.603 bs=262144 00:23:33.603 iodepth=64 00:23:33.603 norandommap=1 00:23:33.603 numjobs=1 00:23:33.603 00:23:33.603 [job0] 00:23:33.603 filename=/dev/nvme0n1 00:23:33.603 [job1] 00:23:33.603 filename=/dev/nvme10n1 00:23:33.603 [job2] 00:23:33.603 filename=/dev/nvme1n1 00:23:33.603 [job3] 00:23:33.603 filename=/dev/nvme2n1 00:23:33.603 [job4] 00:23:33.603 filename=/dev/nvme3n1 00:23:33.603 [job5] 00:23:33.603 filename=/dev/nvme4n1 00:23:33.603 [job6] 00:23:33.603 filename=/dev/nvme5n1 00:23:33.603 [job7] 00:23:33.603 filename=/dev/nvme6n1 00:23:33.603 [job8] 00:23:33.603 filename=/dev/nvme7n1 00:23:33.603 [job9] 00:23:33.603 filename=/dev/nvme8n1 00:23:33.603 [job10] 00:23:33.603 filename=/dev/nvme9n1 00:23:33.603 Could not set queue depth (nvme0n1) 00:23:33.603 Could not set queue depth (nvme10n1) 00:23:33.603 Could not set queue depth (nvme1n1) 00:23:33.603 Could not set queue depth (nvme2n1) 00:23:33.603 Could not set queue depth (nvme3n1) 00:23:33.603 Could not set queue depth (nvme4n1) 00:23:33.603 Could not set queue depth (nvme5n1) 00:23:33.603 Could not set queue depth (nvme6n1) 00:23:33.603 Could not set queue depth (nvme7n1) 00:23:33.603 Could not set queue depth (nvme8n1) 00:23:33.603 Could not set queue depth (nvme9n1) 00:23:33.603 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:33.603 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:33.603 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:33.603 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:33.603 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:33.603 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:33.603 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:33.603 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:33.603 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:33.603 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:33.603 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:33.603 fio-3.35 00:23:33.603 Starting 11 threads 00:23:43.605 00:23:43.605 job0: (groupid=0, jobs=1): err= 0: pid=3119655: Mon May 13 20:36:58 2024 00:23:43.605 write: IOPS=660, BW=165MiB/s (173MB/s)(1661MiB/10056msec); 0 zone resets 00:23:43.605 slat (usec): min=25, max=49966, avg=1480.31, stdev=2729.43 00:23:43.605 clat (msec): min=3, max=168, avg=95.36, stdev=25.11 00:23:43.605 lat (msec): min=3, max=168, avg=96.84, stdev=25.42 00:23:43.605 clat percentiles (msec): 00:23:43.605 | 1.00th=[ 39], 
5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 72], 00:23:43.605 | 30.00th=[ 77], 40.00th=[ 94], 50.00th=[ 101], 60.00th=[ 105], 00:23:43.605 | 70.00th=[ 107], 80.00th=[ 113], 90.00th=[ 123], 95.00th=[ 144], 00:23:43.605 | 99.00th=[ 161], 99.50th=[ 163], 99.90th=[ 167], 99.95th=[ 167], 00:23:43.605 | 99.99th=[ 169] 00:23:43.605 bw ( KiB/s): min=106496, max=267264, per=8.32%, avg=168473.60, stdev=44433.90, samples=20 00:23:43.605 iops : min= 416, max= 1044, avg=658.10, stdev=173.57, samples=20 00:23:43.606 lat (msec) : 4=0.02%, 10=0.14%, 20=0.33%, 50=0.95%, 100=46.90% 00:23:43.606 lat (msec) : 250=51.67% 00:23:43.606 cpu : usr=1.43%, sys=2.31%, ctx=1826, majf=0, minf=1 00:23:43.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:43.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:43.606 issued rwts: total=0,6644,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.606 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:43.606 job1: (groupid=0, jobs=1): err= 0: pid=3119681: Mon May 13 20:36:58 2024 00:23:43.606 write: IOPS=751, BW=188MiB/s (197MB/s)(1895MiB/10093msec); 0 zone resets 00:23:43.606 slat (usec): min=23, max=17300, avg=1230.16, stdev=2305.17 00:23:43.606 clat (msec): min=3, max=184, avg=83.96, stdev=21.86 00:23:43.606 lat (msec): min=3, max=184, avg=85.19, stdev=22.18 00:23:43.606 clat percentiles (msec): 00:23:43.606 | 1.00th=[ 25], 5.00th=[ 55], 10.00th=[ 68], 20.00th=[ 73], 00:23:43.606 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 82], 00:23:43.606 | 70.00th=[ 84], 80.00th=[ 99], 90.00th=[ 123], 95.00th=[ 130], 00:23:43.606 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 174], 99.95th=[ 180], 00:23:43.606 | 99.99th=[ 186] 00:23:43.606 bw ( KiB/s): min=124928, max=240640, per=9.51%, avg=192435.20, stdev=33349.07, samples=20 00:23:43.606 iops : min= 488, max= 940, avg=751.70, stdev=130.27, samples=20 00:23:43.606 lat (msec) : 4=0.01%, 10=0.21%, 20=0.49%, 50=3.83%, 100=77.19% 00:23:43.606 lat (msec) : 250=18.27% 00:23:43.606 cpu : usr=1.56%, sys=2.35%, ctx=2399, majf=0, minf=1 00:23:43.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:43.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:43.606 issued rwts: total=0,7580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.606 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:43.606 job2: (groupid=0, jobs=1): err= 0: pid=3119702: Mon May 13 20:36:58 2024 00:23:43.606 write: IOPS=802, BW=201MiB/s (210MB/s)(2021MiB/10065msec); 0 zone resets 00:23:43.606 slat (usec): min=23, max=14121, avg=1194.31, stdev=2112.31 00:23:43.606 clat (msec): min=16, max=148, avg=78.48, stdev=13.37 00:23:43.606 lat (msec): min=16, max=150, avg=79.68, stdev=13.46 00:23:43.606 clat percentiles (msec): 00:23:43.606 | 1.00th=[ 42], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 72], 00:23:43.606 | 30.00th=[ 74], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 79], 00:23:43.606 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 99], 95.00th=[ 103], 00:23:43.606 | 99.00th=[ 124], 99.50th=[ 136], 99.90th=[ 146], 99.95th=[ 148], 00:23:43.606 | 99.99th=[ 148] 00:23:43.606 bw ( KiB/s): min=154112, max=256000, per=10.14%, avg=205286.40, stdev=24291.11, samples=20 00:23:43.606 iops : min= 602, max= 1000, avg=801.90, stdev=94.89, samples=20 00:23:43.606 lat (msec) : 20=0.07%, 50=1.18%, 
100=91.10%, 250=7.65% 00:23:43.606 cpu : usr=1.69%, sys=2.35%, ctx=2254, majf=0, minf=1 00:23:43.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:43.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:43.606 issued rwts: total=0,8082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.606 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:43.606 job3: (groupid=0, jobs=1): err= 0: pid=3119714: Mon May 13 20:36:58 2024 00:23:43.606 write: IOPS=588, BW=147MiB/s (154MB/s)(1489MiB/10115msec); 0 zone resets 00:23:43.606 slat (usec): min=26, max=15343, avg=1675.16, stdev=2963.39 00:23:43.606 clat (msec): min=16, max=233, avg=106.99, stdev=26.00 00:23:43.606 lat (msec): min=16, max=233, avg=108.67, stdev=26.25 00:23:43.606 clat percentiles (msec): 00:23:43.606 | 1.00th=[ 61], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 74], 00:23:43.606 | 30.00th=[ 90], 40.00th=[ 103], 50.00th=[ 121], 60.00th=[ 124], 00:23:43.606 | 70.00th=[ 128], 80.00th=[ 129], 90.00th=[ 131], 95.00th=[ 132], 00:23:43.606 | 99.00th=[ 148], 99.50th=[ 176], 99.90th=[ 226], 99.95th=[ 226], 00:23:43.606 | 99.99th=[ 234] 00:23:43.606 bw ( KiB/s): min=120832, max=225280, per=7.45%, avg=150853.75, stdev=36271.44, samples=20 00:23:43.606 iops : min= 472, max= 880, avg=589.25, stdev=141.66, samples=20 00:23:43.606 lat (msec) : 20=0.07%, 50=0.40%, 100=34.09%, 250=65.44% 00:23:43.606 cpu : usr=1.50%, sys=1.83%, ctx=1537, majf=0, minf=1 00:23:43.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:23:43.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:43.606 issued rwts: total=0,5955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.606 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:43.606 job4: (groupid=0, jobs=1): err= 0: pid=3119720: Mon May 13 20:36:58 2024 00:23:43.606 write: IOPS=564, BW=141MiB/s (148MB/s)(1428MiB/10113msec); 0 zone resets 00:23:43.606 slat (usec): min=21, max=21773, avg=1698.84, stdev=3065.56 00:23:43.606 clat (msec): min=6, max=248, avg=111.61, stdev=24.39 00:23:43.606 lat (msec): min=6, max=248, avg=113.31, stdev=24.62 00:23:43.606 clat percentiles (msec): 00:23:43.606 | 1.00th=[ 46], 5.00th=[ 74], 10.00th=[ 81], 20.00th=[ 86], 00:23:43.606 | 30.00th=[ 99], 40.00th=[ 108], 50.00th=[ 122], 60.00th=[ 126], 00:23:43.606 | 70.00th=[ 129], 80.00th=[ 131], 90.00th=[ 132], 95.00th=[ 138], 00:23:43.606 | 99.00th=[ 148], 99.50th=[ 192], 99.90th=[ 241], 99.95th=[ 241], 00:23:43.606 | 99.99th=[ 249] 00:23:43.606 bw ( KiB/s): min=119296, max=198656, per=7.14%, avg=144582.15, stdev=27358.25, samples=20 00:23:43.606 iops : min= 466, max= 776, avg=564.75, stdev=106.83, samples=20 00:23:43.606 lat (msec) : 10=0.07%, 20=0.14%, 50=1.07%, 100=29.72%, 250=69.00% 00:23:43.606 cpu : usr=1.43%, sys=1.77%, ctx=1661, majf=0, minf=1 00:23:43.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:43.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:43.606 issued rwts: total=0,5710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.606 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:43.606 job5: (groupid=0, jobs=1): err= 0: pid=3119733: Mon May 13 20:36:58 2024 00:23:43.606 write: 
IOPS=1136, BW=284MiB/s (298MB/s)(2867MiB/10092msec); 0 zone resets 00:23:43.606 slat (usec): min=15, max=36058, avg=843.60, stdev=1592.43 00:23:43.606 clat (msec): min=7, max=185, avg=55.41, stdev=17.52 00:23:43.606 lat (msec): min=7, max=185, avg=56.26, stdev=17.76 00:23:43.606 clat percentiles (msec): 00:23:43.606 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 44], 00:23:43.606 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 54], 00:23:43.606 | 70.00th=[ 56], 80.00th=[ 65], 90.00th=[ 81], 95.00th=[ 91], 00:23:43.606 | 99.00th=[ 113], 99.50th=[ 123], 99.90th=[ 167], 99.95th=[ 174], 00:23:43.606 | 99.99th=[ 180] 00:23:43.606 bw ( KiB/s): min=154112, max=373248, per=14.43%, avg=291993.60, stdev=66499.88, samples=20 00:23:43.606 iops : min= 602, max= 1458, avg=1140.60, stdev=259.77, samples=20 00:23:43.606 lat (msec) : 10=0.04%, 20=0.31%, 50=45.57%, 100=51.20%, 250=2.89% 00:23:43.606 cpu : usr=2.82%, sys=3.78%, ctx=3079, majf=0, minf=1 00:23:43.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:43.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:43.606 issued rwts: total=0,11469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.606 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:43.606 job6: (groupid=0, jobs=1): err= 0: pid=3119744: Mon May 13 20:36:58 2024 00:23:43.606 write: IOPS=545, BW=136MiB/s (143MB/s)(1380MiB/10116msec); 0 zone resets 00:23:43.606 slat (usec): min=26, max=98003, avg=1794.60, stdev=3550.94 00:23:43.606 clat (msec): min=22, max=253, avg=115.06, stdev=23.59 00:23:43.606 lat (msec): min=22, max=253, avg=116.85, stdev=23.70 00:23:43.606 clat percentiles (msec): 00:23:43.606 | 1.00th=[ 43], 5.00th=[ 81], 10.00th=[ 86], 20.00th=[ 95], 00:23:43.606 | 30.00th=[ 103], 40.00th=[ 118], 50.00th=[ 124], 60.00th=[ 128], 00:23:43.606 | 70.00th=[ 130], 80.00th=[ 131], 90.00th=[ 134], 95.00th=[ 140], 00:23:43.606 | 99.00th=[ 169], 99.50th=[ 197], 99.90th=[ 245], 99.95th=[ 245], 00:23:43.606 | 99.99th=[ 253] 00:23:43.606 bw ( KiB/s): min=119296, max=178176, per=6.90%, avg=139648.00, stdev=20485.30, samples=20 00:23:43.606 iops : min= 466, max= 696, avg=545.50, stdev=80.02, samples=20 00:23:43.606 lat (msec) : 50=2.01%, 100=23.07%, 250=74.88%, 500=0.04% 00:23:43.606 cpu : usr=1.09%, sys=1.68%, ctx=1447, majf=0, minf=1 00:23:43.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:43.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:43.606 issued rwts: total=0,5518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.606 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:43.606 job7: (groupid=0, jobs=1): err= 0: pid=3119754: Mon May 13 20:36:58 2024 00:23:43.606 write: IOPS=542, BW=136MiB/s (142MB/s)(1373MiB/10115msec); 0 zone resets 00:23:43.606 slat (usec): min=25, max=24454, avg=1724.93, stdev=3105.29 00:23:43.606 clat (msec): min=16, max=232, avg=116.11, stdev=18.01 00:23:43.606 lat (msec): min=16, max=232, avg=117.83, stdev=18.08 00:23:43.606 clat percentiles (msec): 00:23:43.606 | 1.00th=[ 59], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 102], 00:23:43.606 | 30.00th=[ 105], 40.00th=[ 112], 50.00th=[ 120], 60.00th=[ 124], 00:23:43.606 | 70.00th=[ 128], 80.00th=[ 130], 90.00th=[ 132], 95.00th=[ 138], 00:23:43.606 | 99.00th=[ 155], 99.50th=[ 176], 99.90th=[ 226], 
99.95th=[ 226], 00:23:43.606 | 99.99th=[ 232] 00:23:43.606 bw ( KiB/s): min=124928, max=163840, per=6.87%, avg=138982.40, stdev=13206.36, samples=20 00:23:43.606 iops : min= 488, max= 640, avg=542.90, stdev=51.59, samples=20 00:23:43.606 lat (msec) : 20=0.07%, 50=0.56%, 100=13.75%, 250=85.62% 00:23:43.606 cpu : usr=1.33%, sys=1.44%, ctx=1658, majf=0, minf=1 00:23:43.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:43.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:43.607 issued rwts: total=0,5492,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:43.607 job8: (groupid=0, jobs=1): err= 0: pid=3119782: Mon May 13 20:36:58 2024 00:23:43.607 write: IOPS=839, BW=210MiB/s (220MB/s)(2118MiB/10092msec); 0 zone resets 00:23:43.607 slat (usec): min=22, max=64687, avg=1019.14, stdev=2243.24 00:23:43.607 clat (msec): min=3, max=184, avg=75.16, stdev=25.47 00:23:43.607 lat (msec): min=3, max=184, avg=76.18, stdev=25.85 00:23:43.607 clat percentiles (msec): 00:23:43.607 | 1.00th=[ 13], 5.00th=[ 31], 10.00th=[ 47], 20.00th=[ 60], 00:23:43.607 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 77], 00:23:43.607 | 70.00th=[ 80], 80.00th=[ 100], 90.00th=[ 107], 95.00th=[ 114], 00:23:43.607 | 99.00th=[ 150], 99.50th=[ 161], 99.90th=[ 180], 99.95th=[ 180], 00:23:43.607 | 99.99th=[ 184] 00:23:43.607 bw ( KiB/s): min=151552, max=283136, per=10.64%, avg=215315.75, stdev=39663.66, samples=20 00:23:43.607 iops : min= 592, max= 1106, avg=841.05, stdev=154.95, samples=20 00:23:43.607 lat (msec) : 4=0.02%, 10=0.51%, 20=2.09%, 50=9.13%, 100=69.73% 00:23:43.607 lat (msec) : 250=18.52% 00:23:43.607 cpu : usr=1.88%, sys=2.65%, ctx=3361, majf=0, minf=1 00:23:43.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:43.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:43.607 issued rwts: total=0,8473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:43.607 job9: (groupid=0, jobs=1): err= 0: pid=3119795: Mon May 13 20:36:58 2024 00:23:43.607 write: IOPS=785, BW=196MiB/s (206MB/s)(1982MiB/10092msec); 0 zone resets 00:23:43.607 slat (usec): min=21, max=13213, avg=1162.58, stdev=2203.87 00:23:43.607 clat (msec): min=9, max=205, avg=80.29, stdev=21.98 00:23:43.607 lat (msec): min=9, max=205, avg=81.46, stdev=22.27 00:23:43.607 clat percentiles (msec): 00:23:43.607 | 1.00th=[ 22], 5.00th=[ 52], 10.00th=[ 66], 20.00th=[ 73], 00:23:43.607 | 30.00th=[ 74], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 79], 00:23:43.607 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 102], 95.00th=[ 126], 00:23:43.607 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 192], 99.95th=[ 199], 00:23:43.607 | 99.99th=[ 205] 00:23:43.607 bw ( KiB/s): min=110592, max=257024, per=9.95%, avg=201318.40, stdev=32442.85, samples=20 00:23:43.607 iops : min= 432, max= 1004, avg=786.40, stdev=126.73, samples=20 00:23:43.607 lat (msec) : 10=0.05%, 20=0.76%, 50=3.97%, 100=83.32%, 250=11.90% 00:23:43.607 cpu : usr=1.81%, sys=2.16%, ctx=2625, majf=0, minf=1 00:23:43.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:43.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.607 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:43.607 issued rwts: total=0,7927,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:43.607 job10: (groupid=0, jobs=1): err= 0: pid=3119805: Mon May 13 20:36:58 2024 00:23:43.607 write: IOPS=706, BW=177MiB/s (185MB/s)(1780MiB/10073msec); 0 zone resets 00:23:43.607 slat (usec): min=22, max=23779, avg=1299.38, stdev=2493.51 00:23:43.607 clat (msec): min=8, max=171, avg=89.22, stdev=24.60 00:23:43.607 lat (msec): min=8, max=171, avg=90.52, stdev=24.96 00:23:43.607 clat percentiles (msec): 00:23:43.607 | 1.00th=[ 26], 5.00th=[ 49], 10.00th=[ 66], 20.00th=[ 75], 00:23:43.607 | 30.00th=[ 77], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 92], 00:23:43.607 | 70.00th=[ 103], 80.00th=[ 107], 90.00th=[ 116], 95.00th=[ 140], 00:23:43.607 | 99.00th=[ 161], 99.50th=[ 163], 99.90th=[ 167], 99.95th=[ 169], 00:23:43.607 | 99.99th=[ 171] 00:23:43.607 bw ( KiB/s): min=106496, max=239616, per=8.93%, avg=180659.20, stdev=37436.78, samples=20 00:23:43.607 iops : min= 416, max= 936, avg=705.70, stdev=146.24, samples=20 00:23:43.607 lat (msec) : 10=0.03%, 20=0.62%, 50=4.51%, 100=61.36%, 250=33.48% 00:23:43.607 cpu : usr=1.63%, sys=2.08%, ctx=2369, majf=0, minf=1 00:23:43.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:23:43.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:43.607 issued rwts: total=0,7120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.607 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:43.607 00:23:43.607 Run status group 0 (all jobs): 00:23:43.607 WRITE: bw=1976MiB/s (2072MB/s), 136MiB/s-284MiB/s (142MB/s-298MB/s), io=19.5GiB (21.0GB), run=10056-10116msec 00:23:43.607 00:23:43.607 Disk stats (read/write): 00:23:43.607 nvme0n1: ios=49/12823, merge=0/0, ticks=85/1199627, in_queue=1199712, util=96.73% 00:23:43.607 nvme10n1: ios=46/15156, merge=0/0, ticks=94/1231998, in_queue=1232092, util=97.17% 00:23:43.607 nvme1n1: ios=0/15761, merge=0/0, ticks=0/1199988, in_queue=1199988, util=96.91% 00:23:43.607 nvme2n1: ios=43/11880, merge=0/0, ticks=890/1225941, in_queue=1226831, util=100.00% 00:23:43.607 nvme3n1: ios=0/11393, merge=0/0, ticks=0/1227713, in_queue=1227713, util=97.33% 00:23:43.607 nvme4n1: ios=50/22935, merge=0/0, ticks=1166/1226167, in_queue=1227333, util=99.85% 00:23:43.607 nvme5n1: ios=44/11004, merge=0/0, ticks=1966/1207670, in_queue=1209636, util=100.00% 00:23:43.607 nvme6n1: ios=0/10955, merge=0/0, ticks=0/1229627, in_queue=1229627, util=98.13% 00:23:43.607 nvme7n1: ios=43/16945, merge=0/0, ticks=1551/1231606, in_queue=1233157, util=99.86% 00:23:43.607 nvme8n1: ios=0/15549, merge=0/0, ticks=0/1201189, in_queue=1201189, util=98.88% 00:23:43.607 nvme9n1: ios=0/13871, merge=0/0, ticks=0/1202853, in_queue=1202853, util=99.05% 00:23:43.607 20:36:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:23:43.607 20:36:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:23:43.607 20:36:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:43.607 20:36:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:43.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:43.607 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:43.607 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:23:43.868 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:43.868 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:23:43.868 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:23:43.868 20:36:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:43.868 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.868 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:43.868 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.868 20:36:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:43.868 20:36:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:44.129 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:44.129 20:36:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:44.129 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:23:44.129 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:44.129 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:23:44.129 20:36:59 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:44.129 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:23:44.129 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:23:44.129 20:36:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:44.129 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.129 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:44.129 20:36:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.129 20:36:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:44.129 20:36:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:44.390 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:44.390 20:37:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:44.390 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:23:44.390 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:44.390 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:23:44.390 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:44.390 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:23:44.390 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:23:44.390 20:37:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:44.390 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.390 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:44.390 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.390 20:37:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:44.390 20:37:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:44.650 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:44.650 20:37:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:44.650 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:23:44.650 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:44.650 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:23:44.650 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:44.650 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:23:44.650 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:23:44.650 20:37:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:44.650 20:37:00 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.650 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:44.650 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.650 20:37:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:44.650 20:37:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:45.222 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:45.222 20:37:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:45.222 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:23:45.222 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:45.222 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:23:45.222 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:23:45.222 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:45.222 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:23:45.222 20:37:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:45.222 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.222 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:45.222 20:37:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.222 20:37:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.222 20:37:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:45.222 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:45.222 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:45.222 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:23:45.222 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:45.222 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:23:45.483 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:45.483 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:23:45.483 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:23:45.483 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:45.483 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:45.484 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.484 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:45.745 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:45.745 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:45.745 
20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.745 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:46.006 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:46.006 20:37:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:23:46.007 20:37:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:46.007 20:37:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:46.007 rmmod nvme_tcp 00:23:46.007 rmmod nvme_fabrics 00:23:46.007 rmmod nvme_keyring 00:23:46.007 20:37:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:46.007 20:37:01 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:23:46.007 20:37:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:23:46.007 20:37:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 3108413 ']' 00:23:46.007 20:37:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 3108413 00:23:46.007 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 3108413 ']' 00:23:46.007 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 3108413 00:23:46.007 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:23:46.007 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:46.007 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3108413 00:23:46.268 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:46.268 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:46.268 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3108413' 00:23:46.268 killing process with pid 3108413 00:23:46.268 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 3108413 00:23:46.268 [2024-05-13 20:37:01.955696] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:46.268 20:37:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 3108413 00:23:46.530 20:37:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:46.530 20:37:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:46.530 20:37:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:46.530 20:37:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:46.530 20:37:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:46.530 20:37:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.530 20:37:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.530 20:37:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.456 20:37:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:48.456 00:23:48.456 real 1m18.275s 00:23:48.456 user 4m50.365s 00:23:48.456 sys 0m23.447s 00:23:48.456 20:37:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:48.456 20:37:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:48.456 ************************************ 00:23:48.456 END TEST nvmf_multiconnection 00:23:48.456 ************************************ 00:23:48.456 20:37:04 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:48.456 20:37:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:48.456 20:37:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:48.456 20:37:04 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:23:48.456 ************************************ 00:23:48.456 START TEST nvmf_initiator_timeout 00:23:48.456 ************************************ 00:23:48.456 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:48.718 * Looking for test storage... 00:23:48.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:23:48.718 20:37:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 
-- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:56.953 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:56.953 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.953 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.954 20:37:12 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:56.954 Found net devices under 0000:31:00.0: cvl_0_0 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:56.954 Found net devices under 0000:31:00.1: cvl_0_1 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:23:56.954 00:23:56.954 --- 10.0.0.2 ping statistics --- 00:23:56.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.954 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:56.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.469 ms 00:23:56.954 00:23:56.954 --- 10.0.0.1 ping statistics --- 00:23:56.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.954 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=3126847 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 3126847 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 3126847 ']' 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:56.954 20:37:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:56.954 [2024-05-13 20:37:12.468521] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:23:56.954 [2024-05-13 20:37:12.468582] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.954 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.954 [2024-05-13 20:37:12.546545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:56.954 [2024-05-13 20:37:12.621045] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.954 [2024-05-13 20:37:12.621085] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.954 [2024-05-13 20:37:12.621092] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.954 [2024-05-13 20:37:12.621099] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.954 [2024-05-13 20:37:12.621105] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
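For reference, the bring-up traced above condenses to the sequence below. This is a sketch assembled from the trace itself rather than an excerpt of nvmf/common.sh: the interface names (cvl_0_0, cvl_0_1), addresses, and flags are the ones this run reported; the backgrounding and pid capture are paraphrased, and only the grouping and comments are added here.

    # move one E810 port into a namespace so target and initiator talk over real TCP
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                 # target address reachable
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # initiator address reachable

    # start the target inside the namespace and wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # autotest helper: blocks until /var/tmp/spdk.sock answers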
00:23:56.954 [2024-05-13 20:37:12.621263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.954 [2024-05-13 20:37:12.621395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.954 [2024-05-13 20:37:12.621637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:56.954 [2024-05-13 20:37:12.621642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.525 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:57.525 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:23:57.525 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.525 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:57.526 Malloc0 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:57.526 Delay0 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:57.526 [2024-05-13 20:37:13.333067] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:57.526 [2024-05-13 20:37:13.373127] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:57.526 [2024-05-13 20:37:13.373389] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.526 20:37:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:59.438 20:37:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:59.438 20:37:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:23:59.438 20:37:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:23:59.438 20:37:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:23:59.438 20:37:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:24:01.350 20:37:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:01.350 20:37:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:01.350 20:37:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:24:01.350 20:37:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:01.350 20:37:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:01.350 20:37:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:24:01.350 20:37:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3127640 00:24:01.350 20:37:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:01.350 20:37:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:01.350 [global] 00:24:01.350 thread=1 00:24:01.350 invalidate=1 00:24:01.350 rw=write 00:24:01.350 time_based=1 00:24:01.350 runtime=60 00:24:01.350 ioengine=libaio 00:24:01.350 direct=1 00:24:01.350 bs=4096 00:24:01.350 iodepth=1 00:24:01.350 norandommap=0 00:24:01.350 numjobs=1 00:24:01.350 00:24:01.350 verify_dump=1 00:24:01.350 verify_backlog=512 00:24:01.350 verify_state_save=0 00:24:01.350 do_verify=1 00:24:01.350 verify=crc32c-intel 00:24:01.350 [job0] 
00:24:01.350 filename=/dev/nvme0n1 00:24:01.350 Could not set queue depth (nvme0n1) 00:24:01.350 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:01.350 fio-3.35 00:24:01.350 Starting 1 thread 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.653 true 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.653 true 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.653 true 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.653 true 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.653 20:37:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:07.202 true 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:07.202 true 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:24:07.202 true 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:07.202 true 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:07.202 20:37:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3127640 00:25:03.483 00:25:03.483 job0: (groupid=0, jobs=1): err= 0: pid=3127964: Mon May 13 20:38:17 2024 00:25:03.483 read: IOPS=68, BW=273KiB/s (280kB/s)(16.0MiB/60001msec) 00:25:03.483 slat (usec): min=8, max=6752, avg=30.00, stdev=147.08 00:25:03.483 clat (usec): min=752, max=42129k, avg=13819.81, stdev=658279.74 00:25:03.483 lat (usec): min=768, max=42129k, avg=13849.81, stdev=658279.67 00:25:03.483 clat percentiles (usec): 00:25:03.483 | 1.00th=[ 922], 5.00th=[ 1012], 10.00th=[ 1057], 00:25:03.483 | 20.00th=[ 1090], 30.00th=[ 1106], 40.00th=[ 1123], 00:25:03.483 | 50.00th=[ 1139], 60.00th=[ 1139], 70.00th=[ 1156], 00:25:03.483 | 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 41681], 00:25:03.483 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:25:03.483 | 99.95th=[ 42206], 99.99th=[17112761] 00:25:03.483 write: IOPS=76, BW=304KiB/s (311kB/s)(17.8MiB/60001msec); 0 zone resets 00:25:03.483 slat (nsec): min=7663, max=70806, avg=29477.23, stdev=10578.77 00:25:03.483 clat (usec): min=306, max=964, avg=672.71, stdev=101.48 00:25:03.483 lat (usec): min=341, max=997, avg=702.19, stdev=107.04 00:25:03.483 clat percentiles (usec): 00:25:03.483 | 1.00th=[ 429], 5.00th=[ 490], 10.00th=[ 529], 20.00th=[ 586], 00:25:03.483 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[ 709], 00:25:03.483 | 70.00th=[ 734], 80.00th=[ 758], 90.00th=[ 799], 95.00th=[ 832], 00:25:03.483 | 99.00th=[ 881], 99.50th=[ 906], 99.90th=[ 947], 99.95th=[ 963], 00:25:03.483 | 99.99th=[ 963] 00:25:03.483 bw ( KiB/s): min= 328, max= 4096, per=100.00%, avg=2184.47, stdev=1243.70, samples=15 00:25:03.483 iops : min= 82, max= 1024, avg=546.07, stdev=311.01, samples=15 00:25:03.483 lat (usec) : 500=2.97%, 750=37.35%, 1000=14.52% 00:25:03.483 lat (msec) : 2=42.35%, 50=2.80%, >=2000=0.01% 00:25:03.483 cpu : usr=0.32%, sys=0.52%, ctx=8661, majf=0, minf=1 00:25:03.483 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:03.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.483 issued rwts: total=4096,4562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.483 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:03.483 00:25:03.483 Run status group 0 (all jobs): 00:25:03.483 READ: bw=273KiB/s (280kB/s), 273KiB/s-273KiB/s (280kB/s-280kB/s), io=16.0MiB (16.8MB), run=60001-60001msec 00:25:03.483 WRITE: bw=304KiB/s (311kB/s), 304KiB/s-304KiB/s (311kB/s-311kB/s), io=17.8MiB (18.7MB), run=60001-60001msec 00:25:03.483 00:25:03.483 Disk stats (read/write): 00:25:03.483 nvme0n1: ios=4195/4423, merge=0/0, ticks=14514/2519, in_queue=17033, util=100.00% 
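Editor's note: the initiator_timeout exercise above works by placing a delay bdev (Delay0) in front of the namespace, raising its latencies far beyond the fio runtime budget while I/O is in flight, and then restoring them so the queued I/O can complete. A minimal standalone sketch of that same RPC sequence — assuming rpc_cmd maps to scripts/rpc.py against the default /var/tmp/spdk.sock socket, which is an assumption here and not shown verbatim in the log — would be:

# Raise every latency class of the delay bdev to the 31000000 value used above
# (long enough to stall in-flight I/O and exercise the initiator timeout path).
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 31000000
sleep 3
# Drop the latencies back to 30 (as the test does) so queued I/O drains and fio finishes.
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  30
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  30
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30

The fio job itself (4 KiB writes, iodepth 1, 60 s, crc32c verify, via the harness fio-wrapper) then reports the stalled window as the large 41-42 s clat outliers visible in the percentile table above.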
00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:03.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:03.483 nvmf hotplug test: fio successful as expected 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:03.483 rmmod nvme_tcp 00:25:03.483 rmmod nvme_fabrics 00:25:03.483 rmmod nvme_keyring 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 3126847 ']' 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 3126847 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 3126847 ']' 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 3126847 00:25:03.483 20:38:17 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3126847 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3126847' 00:25:03.483 killing process with pid 3126847 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 3126847 00:25:03.483 [2024-05-13 20:38:17.646367] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 3126847 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.483 20:38:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.056 20:38:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:04.056 00:25:04.056 real 1m15.473s 00:25:04.056 user 4m37.786s 00:25:04.056 sys 0m7.717s 00:25:04.056 20:38:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:04.056 20:38:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:04.056 ************************************ 00:25:04.056 END TEST nvmf_initiator_timeout 00:25:04.056 ************************************ 00:25:04.056 20:38:19 nvmf_tcp -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:25:04.056 20:38:19 nvmf_tcp -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:25:04.056 20:38:19 nvmf_tcp -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:25:04.056 20:38:19 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:25:04.056 20:38:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:12.195 20:38:27 
nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:12.195 20:38:27 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:12.196 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:12.196 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:12.196 Found net devices under 0000:31:00.0: cvl_0_0 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:12.196 Found net devices under 0000:31:00.1: cvl_0_1 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:25:12.196 20:38:27 nvmf_tcp -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:12.196 20:38:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:12.196 20:38:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:12.196 20:38:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:12.196 ************************************ 00:25:12.196 START TEST nvmf_perf_adq 00:25:12.196 ************************************ 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:12.196 * Looking for test storage... 
00:25:12.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:12.196 20:38:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:20.336 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:20.336 Found 0000:31:00.1 (0x8086 - 0x159b) 
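Editor's note: the repeated "Found 0000:31:00.x (0x8086 - 0x159b)" blocks come from gather_supported_nvmf_pci_devs walking the PCI bus for supported NICs (here Intel E810, device ID 0x159b) and collecting the bound net devices from sysfs. A rough equivalent of that discovery, done directly with lspci and sysfs rather than through nvmf/common.sh (so the exact commands are an illustration, not the harness's code), might look like:

# Sketch: list Intel (0x8086) E810 (0x159b) functions and the net interfaces bound to them.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    echo "Found $pci (0x8086 - 0x159b)"
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] && echo "  net device: $(basename "$dev")"
    done
done

On this host that yields the two cvl_0_0 / cvl_0_1 interfaces the rest of the run uses.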
00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:20.336 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:20.337 Found net devices under 0000:31:00.0: cvl_0_0 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:20.337 Found net devices under 0000:31:00.1: cvl_0_1 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:25:20.337 20:38:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:25:21.340 20:38:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:25:23.343 20:38:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:25:28.633 20:38:43 
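Editor's note: before the ADQ run the harness reloads the E810 driver (adq_reload_driver above). A minimal standalone version, assuming the interfaces on this host can safely be bounced, is simply:

rmmod ice          # unload the E810 driver so queue configuration starts clean
modprobe ice       # reload it; the cvl_* interfaces reappear
sleep 5            # give the NIC time to finish link/queue setup before the test continues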
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:25:28.634 20:38:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:28.634 20:38:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.634 20:38:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:28.634 20:38:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:28.634 20:38:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:28.634 20:38:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.634 20:38:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.634 20:38:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:28.634 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:28.634 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:28.634 Found net devices under 0000:31:00.0: cvl_0_0 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:28.634 Found net devices under 0000:31:00.1: cvl_0_1 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.634 20:38:44 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:28.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.793 ms 00:25:28.634 00:25:28.634 --- 10.0.0.2 ping statistics --- 00:25:28.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.634 rtt min/avg/max/mdev = 0.793/0.793/0.793/0.000 ms 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:25:28.634 00:25:28.634 --- 10.0.0.1 ping statistics --- 00:25:28.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.634 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:28.634 20:38:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:28.635 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:28.635 20:38:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:28.635 20:38:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:28.635 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3150070 00:25:28.635 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3150070 00:25:28.635 20:38:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3150070 ']' 00:25:28.635 20:38:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.635 20:38:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:28.635 20:38:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.635 20:38:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:28.635 20:38:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:28.635 20:38:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:28.635 [2024-05-13 20:38:44.430169] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
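Editor's note: nvmftestinit above splits the two E810 ports between the host and a network namespace, so the target listens on 10.0.0.2 inside cvl_0_0_ns_spdk while the initiator stays on 10.0.0.1 on the host side. A condensed sketch of that setup, using the same interface and address names the log shows (the address flushes and error handling of nvmf/common.sh are omitted):

ip netns add cvl_0_0_ns_spdk                                  # namespace that will own the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                            # sanity-check reachability both ways
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two ping transcripts above are the output of exactly this reachability check, after which nvmf_tgt is started inside the namespace with -m 0xF --wait-for-rpc.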
00:25:28.635 [2024-05-13 20:38:44.430244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.635 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.635 [2024-05-13 20:38:44.508859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:28.896 [2024-05-13 20:38:44.583183] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.896 [2024-05-13 20:38:44.583225] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.896 [2024-05-13 20:38:44.583232] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.896 [2024-05-13 20:38:44.583239] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.896 [2024-05-13 20:38:44.583244] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.896 [2024-05-13 20:38:44.583829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.896 [2024-05-13 20:38:44.583931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.896 [2024-05-13 20:38:44.584055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:28.896 [2024-05-13 20:38:44.584167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:29.470 [2024-05-13 20:38:45.381597] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:29.470 Malloc1 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.470 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:29.730 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.730 20:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:29.730 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.730 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:29.730 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.730 20:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.730 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.730 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:29.730 [2024-05-13 20:38:45.438174] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:29.730 [2024-05-13 20:38:45.438435] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.730 20:38:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.730 20:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3150193 00:25:29.730 20:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:25:29.730 20:38:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:29.730 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.646 20:38:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:25:31.646 20:38:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.646 20:38:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
00:25:31.646 20:38:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.646 20:38:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:25:31.646 "tick_rate": 2400000000, 00:25:31.646 "poll_groups": [ 00:25:31.646 { 00:25:31.646 "name": "nvmf_tgt_poll_group_000", 00:25:31.646 "admin_qpairs": 1, 00:25:31.646 "io_qpairs": 1, 00:25:31.646 "current_admin_qpairs": 1, 00:25:31.646 "current_io_qpairs": 1, 00:25:31.646 "pending_bdev_io": 0, 00:25:31.646 "completed_nvme_io": 20972, 00:25:31.646 "transports": [ 00:25:31.646 { 00:25:31.646 "trtype": "TCP" 00:25:31.646 } 00:25:31.646 ] 00:25:31.646 }, 00:25:31.646 { 00:25:31.646 "name": "nvmf_tgt_poll_group_001", 00:25:31.646 "admin_qpairs": 0, 00:25:31.646 "io_qpairs": 1, 00:25:31.646 "current_admin_qpairs": 0, 00:25:31.646 "current_io_qpairs": 1, 00:25:31.646 "pending_bdev_io": 0, 00:25:31.646 "completed_nvme_io": 29368, 00:25:31.646 "transports": [ 00:25:31.646 { 00:25:31.646 "trtype": "TCP" 00:25:31.646 } 00:25:31.646 ] 00:25:31.646 }, 00:25:31.646 { 00:25:31.646 "name": "nvmf_tgt_poll_group_002", 00:25:31.646 "admin_qpairs": 0, 00:25:31.646 "io_qpairs": 1, 00:25:31.646 "current_admin_qpairs": 0, 00:25:31.646 "current_io_qpairs": 1, 00:25:31.646 "pending_bdev_io": 0, 00:25:31.646 "completed_nvme_io": 21131, 00:25:31.646 "transports": [ 00:25:31.646 { 00:25:31.646 "trtype": "TCP" 00:25:31.646 } 00:25:31.646 ] 00:25:31.646 }, 00:25:31.646 { 00:25:31.646 "name": "nvmf_tgt_poll_group_003", 00:25:31.646 "admin_qpairs": 0, 00:25:31.646 "io_qpairs": 1, 00:25:31.646 "current_admin_qpairs": 0, 00:25:31.646 "current_io_qpairs": 1, 00:25:31.646 "pending_bdev_io": 0, 00:25:31.646 "completed_nvme_io": 21127, 00:25:31.646 "transports": [ 00:25:31.646 { 00:25:31.646 "trtype": "TCP" 00:25:31.646 } 00:25:31.646 ] 00:25:31.646 } 00:25:31.646 ] 00:25:31.646 }' 00:25:31.646 20:38:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:31.646 20:38:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:25:31.647 20:38:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:25:31.647 20:38:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:25:31.647 20:38:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3150193 00:25:39.787 Initializing NVMe Controllers 00:25:39.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:39.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:39.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:39.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:39.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:39.787 Initialization complete. Launching workers. 
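Editor's note: the JSON dumped above is what the ADQ test inspects to confirm connections are being spread across poll groups: it fetches per-poll-group stats from the target and counts how many groups currently carry exactly one io_qpair, expecting 4 with the 0xF target mask and the four perf cores. A standalone version of that check, assuming scripts/rpc.py talks to the same default RPC socket as the harness's rpc_cmd wrapper, could be:

# Count poll groups that currently own exactly one I/O qpair; with ADQ working,
# each of the four spdk_nvme_perf connections should land on its own group.
count=$(./scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)
if [ "$count" -ne 4 ]; then
    echo "ADQ placement check failed: only $count poll groups have one I/O qpair" >&2
fi

The per-core latency table that follows is the spdk_nvme_perf summary for those same four connections.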
00:25:39.787 ======================================================== 00:25:39.787 Latency(us) 00:25:39.787 Device Information : IOPS MiB/s Average min max 00:25:39.787 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11320.20 44.22 5654.57 1465.56 9068.44 00:25:39.787 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 16379.50 63.98 3907.39 1244.44 7917.50 00:25:39.787 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13485.80 52.68 4745.43 1271.33 11767.89 00:25:39.787 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11367.90 44.41 5629.93 1562.84 11336.56 00:25:39.787 ======================================================== 00:25:39.787 Total : 52553.40 205.29 4871.40 1244.44 11767.89 00:25:39.787 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:39.787 rmmod nvme_tcp 00:25:39.787 rmmod nvme_fabrics 00:25:39.787 rmmod nvme_keyring 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3150070 ']' 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3150070 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3150070 ']' 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3150070 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3150070 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3150070' 00:25:39.787 killing process with pid 3150070 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3150070 00:25:39.787 [2024-05-13 20:38:55.726284] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:39.787 20:38:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3150070 00:25:40.046 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:40.046 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:40.046 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:40.046 20:38:55 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:40.046 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:40.046 20:38:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.046 20:38:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.046 20:38:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.587 20:38:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:42.587 20:38:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:25:42.587 20:38:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:25:43.528 20:38:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:25:45.438 20:39:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:50.723 
20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:50.723 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:50.723 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.723 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:50.724 Found net devices under 0000:31:00.0: cvl_0_0 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:50.724 Found net devices under 0000:31:00.1: cvl_0_1 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:50.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:25:50.724 00:25:50.724 --- 10.0.0.2 ping statistics --- 00:25:50.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.724 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:25:50.724 00:25:50.724 --- 10.0.0.1 ping statistics --- 00:25:50.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.724 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:50.724 net.core.busy_poll = 1 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:50.724 net.core.busy_read = 1 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:50.724 20:39:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:50.985 20:39:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:50.986 20:39:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:50.986 20:39:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:50.986 20:39:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:50.986 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:50.986 20:39:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:50.986 20:39:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:51.247 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3154958 00:25:51.247 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3154958 00:25:51.247 20:39:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:51.247 20:39:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3154958 ']' 00:25:51.247 20:39:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.247 20:39:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:51.247 20:39:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.247 20:39:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:51.247 20:39:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:51.247 [2024-05-13 20:39:06.985729] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:25:51.248 [2024-05-13 20:39:06.985793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.248 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.248 [2024-05-13 20:39:07.070836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:51.248 [2024-05-13 20:39:07.141819] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.248 [2024-05-13 20:39:07.141861] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.248 [2024-05-13 20:39:07.141869] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.248 [2024-05-13 20:39:07.141875] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.248 [2024-05-13 20:39:07.141881] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:51.248 [2024-05-13 20:39:07.142025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.248 [2024-05-13 20:39:07.142118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:51.248 [2024-05-13 20:39:07.142210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:51.248 [2024-05-13 20:39:07.142213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.189 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.190 [2024-05-13 20:39:07.940617] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.190 Malloc1 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.190 20:39:07 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.190 20:39:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.190 [2024-05-13 20:39:07.999757] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:52.190 [2024-05-13 20:39:07.999996] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.190 20:39:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.190 20:39:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3155269 00:25:52.190 20:39:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:25:52.190 20:39:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:52.190 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.149 20:39:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:25:54.149 20:39:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.149 20:39:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:54.149 20:39:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.149 20:39:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:25:54.149 "tick_rate": 2400000000, 00:25:54.149 "poll_groups": [ 00:25:54.149 { 00:25:54.149 "name": "nvmf_tgt_poll_group_000", 00:25:54.149 "admin_qpairs": 1, 00:25:54.149 "io_qpairs": 3, 00:25:54.149 "current_admin_qpairs": 1, 00:25:54.149 "current_io_qpairs": 3, 00:25:54.149 "pending_bdev_io": 0, 00:25:54.149 "completed_nvme_io": 30647, 00:25:54.149 "transports": [ 00:25:54.149 { 00:25:54.149 "trtype": "TCP" 00:25:54.149 } 00:25:54.149 ] 00:25:54.149 }, 00:25:54.149 { 00:25:54.149 "name": "nvmf_tgt_poll_group_001", 00:25:54.149 "admin_qpairs": 0, 00:25:54.149 "io_qpairs": 1, 00:25:54.149 "current_admin_qpairs": 0, 00:25:54.149 "current_io_qpairs": 1, 00:25:54.149 "pending_bdev_io": 0, 00:25:54.149 "completed_nvme_io": 39699, 00:25:54.149 "transports": [ 00:25:54.149 { 00:25:54.149 "trtype": "TCP" 00:25:54.149 } 00:25:54.149 ] 00:25:54.149 }, 00:25:54.149 { 00:25:54.149 "name": 
"nvmf_tgt_poll_group_002", 00:25:54.149 "admin_qpairs": 0, 00:25:54.149 "io_qpairs": 0, 00:25:54.149 "current_admin_qpairs": 0, 00:25:54.149 "current_io_qpairs": 0, 00:25:54.149 "pending_bdev_io": 0, 00:25:54.149 "completed_nvme_io": 0, 00:25:54.149 "transports": [ 00:25:54.149 { 00:25:54.149 "trtype": "TCP" 00:25:54.149 } 00:25:54.149 ] 00:25:54.149 }, 00:25:54.149 { 00:25:54.149 "name": "nvmf_tgt_poll_group_003", 00:25:54.149 "admin_qpairs": 0, 00:25:54.149 "io_qpairs": 0, 00:25:54.149 "current_admin_qpairs": 0, 00:25:54.149 "current_io_qpairs": 0, 00:25:54.149 "pending_bdev_io": 0, 00:25:54.149 "completed_nvme_io": 0, 00:25:54.149 "transports": [ 00:25:54.149 { 00:25:54.149 "trtype": "TCP" 00:25:54.149 } 00:25:54.149 ] 00:25:54.149 } 00:25:54.149 ] 00:25:54.149 }' 00:25:54.149 20:39:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:54.149 20:39:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:25:54.149 20:39:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:25:54.149 20:39:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:25:54.149 20:39:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3155269 00:26:02.329 Initializing NVMe Controllers 00:26:02.329 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:02.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:02.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:02.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:02.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:02.329 Initialization complete. Launching workers. 
00:26:02.329 ======================================================== 00:26:02.329 Latency(us) 00:26:02.329 Device Information : IOPS MiB/s Average min max 00:26:02.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7215.60 28.19 8874.46 1344.05 59618.64 00:26:02.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6503.80 25.41 9840.16 1386.68 57027.45 00:26:02.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 20404.30 79.70 3142.59 1200.30 44417.92 00:26:02.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7035.70 27.48 9095.89 1479.54 58533.35 00:26:02.329 ======================================================== 00:26:02.329 Total : 41159.40 160.78 6223.40 1200.30 59618.64 00:26:02.329 00:26:02.329 20:39:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:26:02.329 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:02.329 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:02.329 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:02.329 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:02.329 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:02.329 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:02.329 rmmod nvme_tcp 00:26:02.329 rmmod nvme_fabrics 00:26:02.329 rmmod nvme_keyring 00:26:02.329 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:02.329 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:02.329 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:02.329 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3154958 ']' 00:26:02.329 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3154958 00:26:02.329 20:39:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3154958 ']' 00:26:02.329 20:39:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3154958 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3154958 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3154958' 00:26:02.590 killing process with pid 3154958 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3154958 00:26:02.590 [2024-05-13 20:39:18.328203] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3154958 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:02.590 20:39:18 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:02.590 20:39:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.135 20:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:05.135 20:39:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:05.135 00:26:05.135 real 0m52.702s 00:26:05.135 user 2m49.911s 00:26:05.135 sys 0m10.789s 00:26:05.135 20:39:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:05.135 20:39:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:05.135 ************************************ 00:26:05.135 END TEST nvmf_perf_adq 00:26:05.135 ************************************ 00:26:05.135 20:39:20 nvmf_tcp -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:05.135 20:39:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:05.135 20:39:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:05.135 20:39:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:05.135 ************************************ 00:26:05.135 START TEST nvmf_shutdown 00:26:05.135 ************************************ 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:05.135 * Looking for test storage... 
00:26:05.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:05.135 ************************************ 00:26:05.135 START TEST nvmf_shutdown_tc1 00:26:05.135 ************************************ 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:26:05.135 20:39:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:05.135 20:39:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:13.275 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:13.275 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.275 20:39:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:13.275 Found net devices under 0000:31:00.0: cvl_0_0 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.275 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:13.276 Found net devices under 0000:31:00.1: cvl_0_1 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:13.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.748 ms 00:26:13.276 00:26:13.276 --- 10.0.0.2 ping statistics --- 00:26:13.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.276 rtt min/avg/max/mdev = 0.748/0.748/0.748/0.000 ms 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:13.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.382 ms 00:26:13.276 00:26:13.276 --- 10.0.0.1 ping statistics --- 00:26:13.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.276 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3162292 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3162292 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3162292 ']' 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:13.276 20:39:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:13.276 [2024-05-13 20:39:28.949226] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:26:13.276 [2024-05-13 20:39:28.949287] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.276 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.276 [2024-05-13 20:39:29.045921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.276 [2024-05-13 20:39:29.139903] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.276 [2024-05-13 20:39:29.139967] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.276 [2024-05-13 20:39:29.139975] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.276 [2024-05-13 20:39:29.139982] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.276 [2024-05-13 20:39:29.139988] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.276 [2024-05-13 20:39:29.140124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.276 [2024-05-13 20:39:29.140277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.276 [2024-05-13 20:39:29.140768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:13.276 [2024-05-13 20:39:29.140770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.847 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:13.847 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:26:13.847 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:13.847 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.847 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:13.847 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.847 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:13.847 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.847 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:13.847 [2024-05-13 20:39:29.782776] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.108 20:39:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:14.108 Malloc1 00:26:14.108 [2024-05-13 20:39:29.886041] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:14.108 [2024-05-13 20:39:29.886297] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.108 Malloc2 00:26:14.108 Malloc3 00:26:14.108 Malloc4 00:26:14.108 Malloc5 00:26:14.369 Malloc6 00:26:14.369 Malloc7 00:26:14.369 Malloc8 00:26:14.369 Malloc9 00:26:14.369 Malloc10 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:14.369 20:39:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3162674 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3162674 /var/tmp/bdevperf.sock 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3162674 ']' 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:14.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:14.369 { 00:26:14.369 "params": { 00:26:14.369 "name": "Nvme$subsystem", 00:26:14.369 "trtype": "$TEST_TRANSPORT", 00:26:14.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.369 "adrfam": "ipv4", 00:26:14.369 "trsvcid": "$NVMF_PORT", 00:26:14.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.369 "hdgst": ${hdgst:-false}, 00:26:14.369 "ddgst": ${ddgst:-false} 00:26:14.369 }, 00:26:14.369 "method": "bdev_nvme_attach_controller" 00:26:14.369 } 00:26:14.369 EOF 00:26:14.369 )") 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:14.369 { 00:26:14.369 "params": { 00:26:14.369 "name": "Nvme$subsystem", 00:26:14.369 "trtype": "$TEST_TRANSPORT", 00:26:14.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.369 "adrfam": "ipv4", 00:26:14.369 "trsvcid": "$NVMF_PORT", 00:26:14.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.369 "hdgst": ${hdgst:-false}, 00:26:14.369 "ddgst": ${ddgst:-false} 00:26:14.369 }, 00:26:14.369 "method": "bdev_nvme_attach_controller" 00:26:14.369 } 00:26:14.369 EOF 00:26:14.369 )") 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:26:14.369 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:14.369 { 00:26:14.369 "params": { 00:26:14.369 "name": "Nvme$subsystem", 00:26:14.369 "trtype": "$TEST_TRANSPORT", 00:26:14.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.369 "adrfam": "ipv4", 00:26:14.369 "trsvcid": "$NVMF_PORT", 00:26:14.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.369 "hdgst": ${hdgst:-false}, 00:26:14.369 "ddgst": ${ddgst:-false} 00:26:14.369 }, 00:26:14.369 "method": "bdev_nvme_attach_controller" 00:26:14.369 } 00:26:14.369 EOF 00:26:14.369 )") 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:14.630 { 00:26:14.630 "params": { 00:26:14.630 "name": "Nvme$subsystem", 00:26:14.630 "trtype": "$TEST_TRANSPORT", 00:26:14.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.630 "adrfam": "ipv4", 00:26:14.630 "trsvcid": "$NVMF_PORT", 00:26:14.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.630 "hdgst": ${hdgst:-false}, 00:26:14.630 "ddgst": ${ddgst:-false} 00:26:14.630 }, 00:26:14.630 "method": "bdev_nvme_attach_controller" 00:26:14.630 } 00:26:14.630 EOF 00:26:14.630 )") 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:14.630 { 00:26:14.630 "params": { 00:26:14.630 "name": "Nvme$subsystem", 00:26:14.630 "trtype": "$TEST_TRANSPORT", 00:26:14.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.630 "adrfam": "ipv4", 00:26:14.630 "trsvcid": "$NVMF_PORT", 00:26:14.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.630 "hdgst": ${hdgst:-false}, 00:26:14.630 "ddgst": ${ddgst:-false} 00:26:14.630 }, 00:26:14.630 "method": "bdev_nvme_attach_controller" 00:26:14.630 } 00:26:14.630 EOF 00:26:14.630 )") 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:14.630 { 00:26:14.630 "params": { 00:26:14.630 "name": "Nvme$subsystem", 00:26:14.630 "trtype": "$TEST_TRANSPORT", 00:26:14.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.630 "adrfam": "ipv4", 00:26:14.630 "trsvcid": "$NVMF_PORT", 00:26:14.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.630 "hdgst": ${hdgst:-false}, 00:26:14.630 "ddgst": ${ddgst:-false} 00:26:14.630 }, 00:26:14.630 "method": "bdev_nvme_attach_controller" 00:26:14.630 } 00:26:14.630 EOF 00:26:14.630 )") 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:14.630 [2024-05-13 20:39:30.338166] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 
initialization... 00:26:14.630 [2024-05-13 20:39:30.338218] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:14.630 { 00:26:14.630 "params": { 00:26:14.630 "name": "Nvme$subsystem", 00:26:14.630 "trtype": "$TEST_TRANSPORT", 00:26:14.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.630 "adrfam": "ipv4", 00:26:14.630 "trsvcid": "$NVMF_PORT", 00:26:14.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.630 "hdgst": ${hdgst:-false}, 00:26:14.630 "ddgst": ${ddgst:-false} 00:26:14.630 }, 00:26:14.630 "method": "bdev_nvme_attach_controller" 00:26:14.630 } 00:26:14.630 EOF 00:26:14.630 )") 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:14.630 { 00:26:14.630 "params": { 00:26:14.630 "name": "Nvme$subsystem", 00:26:14.630 "trtype": "$TEST_TRANSPORT", 00:26:14.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.630 "adrfam": "ipv4", 00:26:14.630 "trsvcid": "$NVMF_PORT", 00:26:14.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.630 "hdgst": ${hdgst:-false}, 00:26:14.630 "ddgst": ${ddgst:-false} 00:26:14.630 }, 00:26:14.630 "method": "bdev_nvme_attach_controller" 00:26:14.630 } 00:26:14.630 EOF 00:26:14.630 )") 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:14.630 { 00:26:14.630 "params": { 00:26:14.630 "name": "Nvme$subsystem", 00:26:14.630 "trtype": "$TEST_TRANSPORT", 00:26:14.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.630 "adrfam": "ipv4", 00:26:14.630 "trsvcid": "$NVMF_PORT", 00:26:14.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.630 "hdgst": ${hdgst:-false}, 00:26:14.630 "ddgst": ${ddgst:-false} 00:26:14.630 }, 00:26:14.630 "method": "bdev_nvme_attach_controller" 00:26:14.630 } 00:26:14.630 EOF 00:26:14.630 )") 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:14.630 { 00:26:14.630 "params": { 00:26:14.630 "name": "Nvme$subsystem", 00:26:14.630 "trtype": "$TEST_TRANSPORT", 00:26:14.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.630 "adrfam": "ipv4", 00:26:14.630 "trsvcid": "$NVMF_PORT", 00:26:14.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.630 "hdgst": ${hdgst:-false}, 
00:26:14.630 "ddgst": ${ddgst:-false} 00:26:14.630 }, 00:26:14.630 "method": "bdev_nvme_attach_controller" 00:26:14.630 } 00:26:14.630 EOF 00:26:14.630 )") 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:14.630 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:14.630 20:39:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:14.630 "params": { 00:26:14.630 "name": "Nvme1", 00:26:14.630 "trtype": "tcp", 00:26:14.630 "traddr": "10.0.0.2", 00:26:14.630 "adrfam": "ipv4", 00:26:14.630 "trsvcid": "4420", 00:26:14.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:14.630 "hdgst": false, 00:26:14.630 "ddgst": false 00:26:14.630 }, 00:26:14.630 "method": "bdev_nvme_attach_controller" 00:26:14.630 },{ 00:26:14.630 "params": { 00:26:14.630 "name": "Nvme2", 00:26:14.630 "trtype": "tcp", 00:26:14.630 "traddr": "10.0.0.2", 00:26:14.630 "adrfam": "ipv4", 00:26:14.630 "trsvcid": "4420", 00:26:14.630 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:14.630 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:14.630 "hdgst": false, 00:26:14.630 "ddgst": false 00:26:14.630 }, 00:26:14.630 "method": "bdev_nvme_attach_controller" 00:26:14.630 },{ 00:26:14.630 "params": { 00:26:14.630 "name": "Nvme3", 00:26:14.630 "trtype": "tcp", 00:26:14.630 "traddr": "10.0.0.2", 00:26:14.630 "adrfam": "ipv4", 00:26:14.630 "trsvcid": "4420", 00:26:14.630 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:14.630 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:14.630 "hdgst": false, 00:26:14.630 "ddgst": false 00:26:14.630 }, 00:26:14.630 "method": "bdev_nvme_attach_controller" 00:26:14.630 },{ 00:26:14.630 "params": { 00:26:14.630 "name": "Nvme4", 00:26:14.630 "trtype": "tcp", 00:26:14.630 "traddr": "10.0.0.2", 00:26:14.630 "adrfam": "ipv4", 00:26:14.630 "trsvcid": "4420", 00:26:14.630 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:14.630 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:14.630 "hdgst": false, 00:26:14.630 "ddgst": false 00:26:14.630 }, 00:26:14.630 "method": "bdev_nvme_attach_controller" 00:26:14.630 },{ 00:26:14.630 "params": { 00:26:14.630 "name": "Nvme5", 00:26:14.630 "trtype": "tcp", 00:26:14.630 "traddr": "10.0.0.2", 00:26:14.631 "adrfam": "ipv4", 00:26:14.631 "trsvcid": "4420", 00:26:14.631 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:14.631 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:14.631 "hdgst": false, 00:26:14.631 "ddgst": false 00:26:14.631 }, 00:26:14.631 "method": "bdev_nvme_attach_controller" 00:26:14.631 },{ 00:26:14.631 "params": { 00:26:14.631 "name": "Nvme6", 00:26:14.631 "trtype": "tcp", 00:26:14.631 "traddr": "10.0.0.2", 00:26:14.631 "adrfam": "ipv4", 00:26:14.631 "trsvcid": "4420", 00:26:14.631 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:14.631 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:14.631 "hdgst": false, 00:26:14.631 "ddgst": false 00:26:14.631 }, 00:26:14.631 "method": "bdev_nvme_attach_controller" 00:26:14.631 },{ 00:26:14.631 "params": { 00:26:14.631 "name": "Nvme7", 00:26:14.631 "trtype": "tcp", 00:26:14.631 "traddr": "10.0.0.2", 00:26:14.631 "adrfam": "ipv4", 00:26:14.631 "trsvcid": "4420", 00:26:14.631 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:14.631 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:14.631 "hdgst": false, 00:26:14.631 "ddgst": false 
00:26:14.631 }, 00:26:14.631 "method": "bdev_nvme_attach_controller" 00:26:14.631 },{ 00:26:14.631 "params": { 00:26:14.631 "name": "Nvme8", 00:26:14.631 "trtype": "tcp", 00:26:14.631 "traddr": "10.0.0.2", 00:26:14.631 "adrfam": "ipv4", 00:26:14.631 "trsvcid": "4420", 00:26:14.631 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:14.631 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:14.631 "hdgst": false, 00:26:14.631 "ddgst": false 00:26:14.631 }, 00:26:14.631 "method": "bdev_nvme_attach_controller" 00:26:14.631 },{ 00:26:14.631 "params": { 00:26:14.631 "name": "Nvme9", 00:26:14.631 "trtype": "tcp", 00:26:14.631 "traddr": "10.0.0.2", 00:26:14.631 "adrfam": "ipv4", 00:26:14.631 "trsvcid": "4420", 00:26:14.631 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:14.631 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:14.631 "hdgst": false, 00:26:14.631 "ddgst": false 00:26:14.631 }, 00:26:14.631 "method": "bdev_nvme_attach_controller" 00:26:14.631 },{ 00:26:14.631 "params": { 00:26:14.631 "name": "Nvme10", 00:26:14.631 "trtype": "tcp", 00:26:14.631 "traddr": "10.0.0.2", 00:26:14.631 "adrfam": "ipv4", 00:26:14.631 "trsvcid": "4420", 00:26:14.631 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:14.631 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:14.631 "hdgst": false, 00:26:14.631 "ddgst": false 00:26:14.631 }, 00:26:14.631 "method": "bdev_nvme_attach_controller" 00:26:14.631 }' 00:26:14.631 [2024-05-13 20:39:30.405390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.631 [2024-05-13 20:39:30.470163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.013 20:39:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:16.013 20:39:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:26:16.013 20:39:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:16.013 20:39:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.013 20:39:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.013 20:39:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.013 20:39:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3162674 00:26:16.013 20:39:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:16.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3162674 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:16.013 20:39:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:26:16.954 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3162292 00:26:16.954 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:16.954 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:16.954 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:16.954 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
00:26:16.954 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.954 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.954 { 00:26:16.954 "params": { 00:26:16.954 "name": "Nvme$subsystem", 00:26:16.955 "trtype": "$TEST_TRANSPORT", 00:26:16.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.955 "adrfam": "ipv4", 00:26:16.955 "trsvcid": "$NVMF_PORT", 00:26:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.955 "hdgst": ${hdgst:-false}, 00:26:16.955 "ddgst": ${ddgst:-false} 00:26:16.955 }, 00:26:16.955 "method": "bdev_nvme_attach_controller" 00:26:16.955 } 00:26:16.955 EOF 00:26:16.955 )") 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.955 { 00:26:16.955 "params": { 00:26:16.955 "name": "Nvme$subsystem", 00:26:16.955 "trtype": "$TEST_TRANSPORT", 00:26:16.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.955 "adrfam": "ipv4", 00:26:16.955 "trsvcid": "$NVMF_PORT", 00:26:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.955 "hdgst": ${hdgst:-false}, 00:26:16.955 "ddgst": ${ddgst:-false} 00:26:16.955 }, 00:26:16.955 "method": "bdev_nvme_attach_controller" 00:26:16.955 } 00:26:16.955 EOF 00:26:16.955 )") 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.955 { 00:26:16.955 "params": { 00:26:16.955 "name": "Nvme$subsystem", 00:26:16.955 "trtype": "$TEST_TRANSPORT", 00:26:16.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.955 "adrfam": "ipv4", 00:26:16.955 "trsvcid": "$NVMF_PORT", 00:26:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.955 "hdgst": ${hdgst:-false}, 00:26:16.955 "ddgst": ${ddgst:-false} 00:26:16.955 }, 00:26:16.955 "method": "bdev_nvme_attach_controller" 00:26:16.955 } 00:26:16.955 EOF 00:26:16.955 )") 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.955 { 00:26:16.955 "params": { 00:26:16.955 "name": "Nvme$subsystem", 00:26:16.955 "trtype": "$TEST_TRANSPORT", 00:26:16.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.955 "adrfam": "ipv4", 00:26:16.955 "trsvcid": "$NVMF_PORT", 00:26:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.955 "hdgst": ${hdgst:-false}, 00:26:16.955 "ddgst": ${ddgst:-false} 00:26:16.955 }, 00:26:16.955 "method": "bdev_nvme_attach_controller" 00:26:16.955 } 00:26:16.955 EOF 00:26:16.955 )") 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:16.955 20:39:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.955 { 00:26:16.955 "params": { 00:26:16.955 "name": "Nvme$subsystem", 00:26:16.955 "trtype": "$TEST_TRANSPORT", 00:26:16.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.955 "adrfam": "ipv4", 00:26:16.955 "trsvcid": "$NVMF_PORT", 00:26:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.955 "hdgst": ${hdgst:-false}, 00:26:16.955 "ddgst": ${ddgst:-false} 00:26:16.955 }, 00:26:16.955 "method": "bdev_nvme_attach_controller" 00:26:16.955 } 00:26:16.955 EOF 00:26:16.955 )") 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.955 { 00:26:16.955 "params": { 00:26:16.955 "name": "Nvme$subsystem", 00:26:16.955 "trtype": "$TEST_TRANSPORT", 00:26:16.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.955 "adrfam": "ipv4", 00:26:16.955 "trsvcid": "$NVMF_PORT", 00:26:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.955 "hdgst": ${hdgst:-false}, 00:26:16.955 "ddgst": ${ddgst:-false} 00:26:16.955 }, 00:26:16.955 "method": "bdev_nvme_attach_controller" 00:26:16.955 } 00:26:16.955 EOF 00:26:16.955 )") 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.955 [2024-05-13 20:39:32.795962] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:26:16.955 [2024-05-13 20:39:32.796027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3163043 ] 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.955 { 00:26:16.955 "params": { 00:26:16.955 "name": "Nvme$subsystem", 00:26:16.955 "trtype": "$TEST_TRANSPORT", 00:26:16.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.955 "adrfam": "ipv4", 00:26:16.955 "trsvcid": "$NVMF_PORT", 00:26:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.955 "hdgst": ${hdgst:-false}, 00:26:16.955 "ddgst": ${ddgst:-false} 00:26:16.955 }, 00:26:16.955 "method": "bdev_nvme_attach_controller" 00:26:16.955 } 00:26:16.955 EOF 00:26:16.955 )") 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.955 { 00:26:16.955 "params": { 00:26:16.955 "name": "Nvme$subsystem", 00:26:16.955 "trtype": "$TEST_TRANSPORT", 00:26:16.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.955 "adrfam": "ipv4", 00:26:16.955 "trsvcid": "$NVMF_PORT", 00:26:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.955 "hdgst": ${hdgst:-false}, 00:26:16.955 "ddgst": ${ddgst:-false} 00:26:16.955 }, 00:26:16.955 "method": "bdev_nvme_attach_controller" 00:26:16.955 } 00:26:16.955 EOF 00:26:16.955 )") 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.955 { 00:26:16.955 "params": { 00:26:16.955 "name": "Nvme$subsystem", 00:26:16.955 "trtype": "$TEST_TRANSPORT", 00:26:16.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.955 "adrfam": "ipv4", 00:26:16.955 "trsvcid": "$NVMF_PORT", 00:26:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.955 "hdgst": ${hdgst:-false}, 00:26:16.955 "ddgst": ${ddgst:-false} 00:26:16.955 }, 00:26:16.955 "method": "bdev_nvme_attach_controller" 00:26:16.955 } 00:26:16.955 EOF 00:26:16.955 )") 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.955 { 00:26:16.955 "params": { 00:26:16.955 "name": "Nvme$subsystem", 00:26:16.955 "trtype": "$TEST_TRANSPORT", 00:26:16.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.955 "adrfam": "ipv4", 00:26:16.955 "trsvcid": "$NVMF_PORT", 00:26:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.955 "hdgst": ${hdgst:-false}, 00:26:16.955 "ddgst": ${ddgst:-false} 00:26:16.955 }, 00:26:16.955 "method": "bdev_nvme_attach_controller" 00:26:16.955 } 
00:26:16.955 EOF 00:26:16.955 )") 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:16.955 20:39:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:16.955 "params": { 00:26:16.955 "name": "Nvme1", 00:26:16.955 "trtype": "tcp", 00:26:16.955 "traddr": "10.0.0.2", 00:26:16.955 "adrfam": "ipv4", 00:26:16.955 "trsvcid": "4420", 00:26:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:16.955 "hdgst": false, 00:26:16.955 "ddgst": false 00:26:16.955 }, 00:26:16.955 "method": "bdev_nvme_attach_controller" 00:26:16.955 },{ 00:26:16.955 "params": { 00:26:16.955 "name": "Nvme2", 00:26:16.955 "trtype": "tcp", 00:26:16.955 "traddr": "10.0.0.2", 00:26:16.955 "adrfam": "ipv4", 00:26:16.955 "trsvcid": "4420", 00:26:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:16.955 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:16.955 "hdgst": false, 00:26:16.955 "ddgst": false 00:26:16.955 }, 00:26:16.955 "method": "bdev_nvme_attach_controller" 00:26:16.955 },{ 00:26:16.955 "params": { 00:26:16.955 "name": "Nvme3", 00:26:16.955 "trtype": "tcp", 00:26:16.955 "traddr": "10.0.0.2", 00:26:16.955 "adrfam": "ipv4", 00:26:16.955 "trsvcid": "4420", 00:26:16.955 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:16.955 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:16.956 "hdgst": false, 00:26:16.956 "ddgst": false 00:26:16.956 }, 00:26:16.956 "method": "bdev_nvme_attach_controller" 00:26:16.956 },{ 00:26:16.956 "params": { 00:26:16.956 "name": "Nvme4", 00:26:16.956 "trtype": "tcp", 00:26:16.956 "traddr": "10.0.0.2", 00:26:16.956 "adrfam": "ipv4", 00:26:16.956 "trsvcid": "4420", 00:26:16.956 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:16.956 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:16.956 "hdgst": false, 00:26:16.956 "ddgst": false 00:26:16.956 }, 00:26:16.956 "method": "bdev_nvme_attach_controller" 00:26:16.956 },{ 00:26:16.956 "params": { 00:26:16.956 "name": "Nvme5", 00:26:16.956 "trtype": "tcp", 00:26:16.956 "traddr": "10.0.0.2", 00:26:16.956 "adrfam": "ipv4", 00:26:16.956 "trsvcid": "4420", 00:26:16.956 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:16.956 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:16.956 "hdgst": false, 00:26:16.956 "ddgst": false 00:26:16.956 }, 00:26:16.956 "method": "bdev_nvme_attach_controller" 00:26:16.956 },{ 00:26:16.956 "params": { 00:26:16.956 "name": "Nvme6", 00:26:16.956 "trtype": "tcp", 00:26:16.956 "traddr": "10.0.0.2", 00:26:16.956 "adrfam": "ipv4", 00:26:16.956 "trsvcid": "4420", 00:26:16.956 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:16.956 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:16.956 "hdgst": false, 00:26:16.956 "ddgst": false 00:26:16.956 }, 00:26:16.956 "method": "bdev_nvme_attach_controller" 00:26:16.956 },{ 00:26:16.956 "params": { 00:26:16.956 "name": "Nvme7", 00:26:16.956 "trtype": "tcp", 00:26:16.956 "traddr": "10.0.0.2", 00:26:16.956 "adrfam": "ipv4", 00:26:16.956 "trsvcid": "4420", 00:26:16.956 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:16.956 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:16.956 "hdgst": false, 00:26:16.956 "ddgst": false 00:26:16.956 }, 00:26:16.956 "method": "bdev_nvme_attach_controller" 00:26:16.956 },{ 00:26:16.956 "params": { 00:26:16.956 "name": "Nvme8", 00:26:16.956 "trtype": "tcp", 00:26:16.956 
"traddr": "10.0.0.2", 00:26:16.956 "adrfam": "ipv4", 00:26:16.956 "trsvcid": "4420", 00:26:16.956 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:16.956 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:16.956 "hdgst": false, 00:26:16.956 "ddgst": false 00:26:16.956 }, 00:26:16.956 "method": "bdev_nvme_attach_controller" 00:26:16.956 },{ 00:26:16.956 "params": { 00:26:16.956 "name": "Nvme9", 00:26:16.956 "trtype": "tcp", 00:26:16.956 "traddr": "10.0.0.2", 00:26:16.956 "adrfam": "ipv4", 00:26:16.956 "trsvcid": "4420", 00:26:16.956 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:16.956 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:16.956 "hdgst": false, 00:26:16.956 "ddgst": false 00:26:16.956 }, 00:26:16.956 "method": "bdev_nvme_attach_controller" 00:26:16.956 },{ 00:26:16.956 "params": { 00:26:16.956 "name": "Nvme10", 00:26:16.956 "trtype": "tcp", 00:26:16.956 "traddr": "10.0.0.2", 00:26:16.956 "adrfam": "ipv4", 00:26:16.956 "trsvcid": "4420", 00:26:16.956 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:16.956 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:16.956 "hdgst": false, 00:26:16.956 "ddgst": false 00:26:16.956 }, 00:26:16.956 "method": "bdev_nvme_attach_controller" 00:26:16.956 }' 00:26:16.956 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.956 [2024-05-13 20:39:32.870748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.217 [2024-05-13 20:39:32.935222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.602 Running I/O for 1 seconds... 00:26:19.544 00:26:19.544 Latency(us) 00:26:19.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.544 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:19.544 Verification LBA range: start 0x0 length 0x400 00:26:19.544 Nvme1n1 : 1.18 217.38 13.59 0.00 0.00 291482.67 19770.03 249910.61 00:26:19.544 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:19.544 Verification LBA range: start 0x0 length 0x400 00:26:19.544 Nvme2n1 : 1.17 218.18 13.64 0.00 0.00 283722.24 15510.19 256901.12 00:26:19.544 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:19.544 Verification LBA range: start 0x0 length 0x400 00:26:19.544 Nvme3n1 : 1.17 272.50 17.03 0.00 0.00 223179.61 16602.45 241172.48 00:26:19.544 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:19.544 Verification LBA range: start 0x0 length 0x400 00:26:19.544 Nvme4n1 : 1.22 261.86 16.37 0.00 0.00 222512.13 20097.71 242920.11 00:26:19.544 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:19.544 Verification LBA range: start 0x0 length 0x400 00:26:19.544 Nvme5n1 : 1.17 218.91 13.68 0.00 0.00 269534.08 17694.72 251658.24 00:26:19.544 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:19.544 Verification LBA range: start 0x0 length 0x400 00:26:19.544 Nvme6n1 : 1.16 224.53 14.03 0.00 0.00 256592.69 8792.75 246415.36 00:26:19.544 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:19.544 Verification LBA range: start 0x0 length 0x400 00:26:19.544 Nvme7n1 : 1.15 227.10 14.19 0.00 0.00 244281.67 19551.57 239424.85 00:26:19.544 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:19.544 Verification LBA range: start 0x0 length 0x400 00:26:19.544 Nvme8n1 : 1.19 269.47 16.84 0.00 0.00 207881.56 19988.48 241172.48 00:26:19.544 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:19.544 
Verification LBA range: start 0x0 length 0x400 00:26:19.544 Nvme9n1 : 1.19 268.77 16.80 0.00 0.00 204726.27 15182.51 298844.16 00:26:19.544 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:19.544 Verification LBA range: start 0x0 length 0x400 00:26:19.544 Nvme10n1 : 1.18 216.10 13.51 0.00 0.00 249614.08 23920.64 284863.15 00:26:19.544 =================================================================================================================== 00:26:19.544 Total : 2394.80 149.68 0.00 0.00 242582.61 8792.75 298844.16 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:19.804 rmmod nvme_tcp 00:26:19.804 rmmod nvme_fabrics 00:26:19.804 rmmod nvme_keyring 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3162292 ']' 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3162292 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 3162292 ']' 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 3162292 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3162292 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3162292' 00:26:19.804 killing process with pid 3162292 00:26:19.804 
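Note (illustrative, not part of the captured output): the per-device rows in the bdevperf table above add up to the Total line (2394.80 IOPS, 149.68 MiB/s). Assuming the plain bdevperf table (without the CI timestamp prefix) has been saved to a file such as perf.log, an awk one-liner reproduces the aggregate:

    # sum field 4 (IOPS) and field 5 (MiB/s) of the per-device result rows
    awk '/Nvme[0-9]+n1 :/ { iops += $4; mib += $5 }
         END { printf "Total IOPS=%.2f MiB/s=%.2f\n", iops, mib }' perf.log
    # -> Total IOPS=2394.80 MiB/s=149.68, matching the Total row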
20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 3162292 00:26:19.804 [2024-05-13 20:39:35.724954] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:19.804 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 3162292 00:26:20.065 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:20.065 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:20.065 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:20.065 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:20.065 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:20.065 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.065 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:20.065 20:39:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:22.613 00:26:22.613 real 0m17.203s 00:26:22.613 user 0m33.473s 00:26:22.613 sys 0m7.028s 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:22.613 ************************************ 00:26:22.613 END TEST nvmf_shutdown_tc1 00:26:22.613 ************************************ 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:22.613 ************************************ 00:26:22.613 START TEST nvmf_shutdown_tc2 00:26:22.613 ************************************ 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.613 20:39:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:22.613 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:22.614 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:22.614 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.614 20:39:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:22.614 Found net devices under 0000:31:00.0: cvl_0_0 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:22.614 Found net devices under 0000:31:00.1: cvl_0_1 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:22.614 
20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:22.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:26:22.614 00:26:22.614 --- 10.0.0.2 ping statistics --- 00:26:22.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.614 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:22.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:26:22.614 00:26:22.614 --- 10.0.0.1 ping statistics --- 00:26:22.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.614 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:26:22.614 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3164333 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3164333 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3164333 ']' 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:22.615 20:39:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:22.876 [2024-05-13 20:39:38.560506] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:26:22.876 [2024-05-13 20:39:38.560589] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.876 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.876 [2024-05-13 20:39:38.649193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:22.876 [2024-05-13 20:39:38.703821] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.876 [2024-05-13 20:39:38.703853] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.876 [2024-05-13 20:39:38.703858] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.876 [2024-05-13 20:39:38.703863] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.876 [2024-05-13 20:39:38.703868] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
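Condensed, the nvmf_tcp_init plumbing traced at common.sh@248-268 above isolates the target port in its own network namespace and verifies reachability in both directions before nvmf_tgt is started (addresses and interface names exactly as in this run):

    ip netns add cvl_0_0_ns_spdk                        # namespace that will host nvmf_tgt
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) in on the initiator-side interface
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace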
00:26:22.876 [2024-05-13 20:39:38.703976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:22.876 [2024-05-13 20:39:38.704121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:22.876 [2024-05-13 20:39:38.704284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.876 [2024-05-13 20:39:38.704286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.449 [2024-05-13 20:39:39.366648] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:23.449 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:23.709 20:39:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.709 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.709 Malloc1 00:26:23.709 [2024-05-13 20:39:39.465605] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:23.709 [2024-05-13 20:39:39.465791] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.709 Malloc2 00:26:23.709 Malloc3 00:26:23.709 Malloc4 00:26:23.709 Malloc5 00:26:23.709 Malloc6 00:26:23.970 Malloc7 00:26:23.970 Malloc8 00:26:23.970 Malloc9 00:26:23.970 Malloc10 00:26:23.970 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.970 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:23.970 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:23.970 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.970 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3164548 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3164548 /var/tmp/bdevperf.sock 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3164548 ']' 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:23.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
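The target/shutdown.sh@27-28 for/cat pairs traced above build rpcs.txt with one block of RPC calls per subsystem index, and shutdown.sh@35 then submits the whole file through rpc_cmd in a single batch. The heredoc body itself is not echoed in this trace, so the block below is only a sketch of the pattern implied by the resulting Malloc1-Malloc10 bdevs and the 10.0.0.2:4420 listener, not a verbatim copy; the real helper appends with a cat heredoc, while the sketch uses grouped echo so it stays runnable as shown, and MALLOC_BDEV_SIZE, MALLOC_BLOCK_SIZE and testdir stand in for values this trace does not print:

    num_subsystems=({1..10})
    rm -rf "$testdir/rpcs.txt"                          # testdir is .../spdk/test/nvmf/target in this run
    for i in "${num_subsystems[@]}"; do
        {
            echo "bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$testdir/rpcs.txt"
    done
    rpc_cmd < "$testdir/rpcs.txt"                       # shutdown.sh@35: run the batch against /var/tmp/spdk.sock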
00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.971 { 00:26:23.971 "params": { 00:26:23.971 "name": "Nvme$subsystem", 00:26:23.971 "trtype": "$TEST_TRANSPORT", 00:26:23.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.971 "adrfam": "ipv4", 00:26:23.971 "trsvcid": "$NVMF_PORT", 00:26:23.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.971 "hdgst": ${hdgst:-false}, 00:26:23.971 "ddgst": ${ddgst:-false} 00:26:23.971 }, 00:26:23.971 "method": "bdev_nvme_attach_controller" 00:26:23.971 } 00:26:23.971 EOF 00:26:23.971 )") 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.971 { 00:26:23.971 "params": { 00:26:23.971 "name": "Nvme$subsystem", 00:26:23.971 "trtype": "$TEST_TRANSPORT", 00:26:23.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.971 "adrfam": "ipv4", 00:26:23.971 "trsvcid": "$NVMF_PORT", 00:26:23.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.971 "hdgst": ${hdgst:-false}, 00:26:23.971 "ddgst": ${ddgst:-false} 00:26:23.971 }, 00:26:23.971 "method": "bdev_nvme_attach_controller" 00:26:23.971 } 00:26:23.971 EOF 00:26:23.971 )") 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.971 { 00:26:23.971 "params": { 00:26:23.971 "name": "Nvme$subsystem", 00:26:23.971 "trtype": "$TEST_TRANSPORT", 00:26:23.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.971 "adrfam": "ipv4", 00:26:23.971 "trsvcid": "$NVMF_PORT", 00:26:23.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.971 "hdgst": ${hdgst:-false}, 00:26:23.971 "ddgst": ${ddgst:-false} 00:26:23.971 }, 00:26:23.971 "method": "bdev_nvme_attach_controller" 00:26:23.971 } 00:26:23.971 EOF 00:26:23.971 )") 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.971 { 00:26:23.971 "params": { 00:26:23.971 "name": "Nvme$subsystem", 00:26:23.971 "trtype": "$TEST_TRANSPORT", 00:26:23.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.971 "adrfam": "ipv4", 00:26:23.971 "trsvcid": "$NVMF_PORT", 00:26:23.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.971 "hdgst": ${hdgst:-false}, 00:26:23.971 "ddgst": ${ddgst:-false} 00:26:23.971 }, 00:26:23.971 "method": "bdev_nvme_attach_controller" 00:26:23.971 } 00:26:23.971 EOF 00:26:23.971 )") 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.971 { 00:26:23.971 "params": { 00:26:23.971 "name": "Nvme$subsystem", 00:26:23.971 "trtype": "$TEST_TRANSPORT", 00:26:23.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.971 "adrfam": "ipv4", 00:26:23.971 "trsvcid": "$NVMF_PORT", 00:26:23.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.971 "hdgst": ${hdgst:-false}, 00:26:23.971 "ddgst": ${ddgst:-false} 00:26:23.971 }, 00:26:23.971 "method": "bdev_nvme_attach_controller" 00:26:23.971 } 00:26:23.971 EOF 00:26:23.971 )") 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.971 { 00:26:23.971 "params": { 00:26:23.971 "name": "Nvme$subsystem", 00:26:23.971 "trtype": "$TEST_TRANSPORT", 00:26:23.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.971 "adrfam": "ipv4", 00:26:23.971 "trsvcid": "$NVMF_PORT", 00:26:23.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.971 "hdgst": ${hdgst:-false}, 00:26:23.971 "ddgst": ${ddgst:-false} 00:26:23.971 }, 00:26:23.971 "method": "bdev_nvme_attach_controller" 00:26:23.971 } 00:26:23.971 EOF 00:26:23.971 )") 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:23.971 [2024-05-13 20:39:39.904568] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:26:23.971 [2024-05-13 20:39:39.904619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3164548 ] 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.971 { 00:26:23.971 "params": { 00:26:23.971 "name": "Nvme$subsystem", 00:26:23.971 "trtype": "$TEST_TRANSPORT", 00:26:23.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.971 "adrfam": "ipv4", 00:26:23.971 "trsvcid": "$NVMF_PORT", 00:26:23.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.971 "hdgst": ${hdgst:-false}, 00:26:23.971 "ddgst": ${ddgst:-false} 00:26:23.971 }, 00:26:23.971 "method": "bdev_nvme_attach_controller" 00:26:23.971 } 00:26:23.971 EOF 00:26:23.971 )") 00:26:23.971 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:24.233 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:24.233 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:24.233 { 00:26:24.233 "params": { 00:26:24.233 "name": "Nvme$subsystem", 00:26:24.233 "trtype": "$TEST_TRANSPORT", 00:26:24.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.233 "adrfam": "ipv4", 00:26:24.233 "trsvcid": "$NVMF_PORT", 00:26:24.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.233 "hdgst": ${hdgst:-false}, 00:26:24.233 "ddgst": ${ddgst:-false} 00:26:24.233 }, 00:26:24.233 "method": "bdev_nvme_attach_controller" 00:26:24.233 } 00:26:24.233 EOF 00:26:24.233 )") 00:26:24.233 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:24.233 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:24.233 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:24.233 { 00:26:24.233 "params": { 00:26:24.233 "name": "Nvme$subsystem", 00:26:24.233 "trtype": "$TEST_TRANSPORT", 00:26:24.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.233 "adrfam": "ipv4", 00:26:24.233 "trsvcid": "$NVMF_PORT", 00:26:24.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.233 "hdgst": ${hdgst:-false}, 00:26:24.233 "ddgst": ${ddgst:-false} 00:26:24.233 }, 00:26:24.233 "method": "bdev_nvme_attach_controller" 00:26:24.233 } 00:26:24.233 EOF 00:26:24.233 )") 00:26:24.233 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:24.233 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:24.233 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:24.233 { 00:26:24.233 "params": { 00:26:24.233 "name": "Nvme$subsystem", 00:26:24.233 "trtype": "$TEST_TRANSPORT", 00:26:24.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.233 "adrfam": "ipv4", 00:26:24.233 "trsvcid": "$NVMF_PORT", 00:26:24.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.233 "hdgst": ${hdgst:-false}, 
00:26:24.233 "ddgst": ${ddgst:-false} 00:26:24.233 }, 00:26:24.233 "method": "bdev_nvme_attach_controller" 00:26:24.233 } 00:26:24.233 EOF 00:26:24.233 )") 00:26:24.233 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:24.233 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.233 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:26:24.233 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:26:24.233 20:39:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:24.233 "params": { 00:26:24.233 "name": "Nvme1", 00:26:24.233 "trtype": "tcp", 00:26:24.233 "traddr": "10.0.0.2", 00:26:24.233 "adrfam": "ipv4", 00:26:24.233 "trsvcid": "4420", 00:26:24.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:24.234 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:24.234 "hdgst": false, 00:26:24.234 "ddgst": false 00:26:24.234 }, 00:26:24.234 "method": "bdev_nvme_attach_controller" 00:26:24.234 },{ 00:26:24.234 "params": { 00:26:24.234 "name": "Nvme2", 00:26:24.234 "trtype": "tcp", 00:26:24.234 "traddr": "10.0.0.2", 00:26:24.234 "adrfam": "ipv4", 00:26:24.234 "trsvcid": "4420", 00:26:24.234 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:24.234 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:24.234 "hdgst": false, 00:26:24.234 "ddgst": false 00:26:24.234 }, 00:26:24.234 "method": "bdev_nvme_attach_controller" 00:26:24.234 },{ 00:26:24.234 "params": { 00:26:24.234 "name": "Nvme3", 00:26:24.234 "trtype": "tcp", 00:26:24.234 "traddr": "10.0.0.2", 00:26:24.234 "adrfam": "ipv4", 00:26:24.234 "trsvcid": "4420", 00:26:24.234 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:24.234 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:24.234 "hdgst": false, 00:26:24.234 "ddgst": false 00:26:24.234 }, 00:26:24.234 "method": "bdev_nvme_attach_controller" 00:26:24.234 },{ 00:26:24.234 "params": { 00:26:24.234 "name": "Nvme4", 00:26:24.234 "trtype": "tcp", 00:26:24.234 "traddr": "10.0.0.2", 00:26:24.234 "adrfam": "ipv4", 00:26:24.234 "trsvcid": "4420", 00:26:24.234 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:24.234 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:24.234 "hdgst": false, 00:26:24.234 "ddgst": false 00:26:24.234 }, 00:26:24.234 "method": "bdev_nvme_attach_controller" 00:26:24.234 },{ 00:26:24.234 "params": { 00:26:24.234 "name": "Nvme5", 00:26:24.234 "trtype": "tcp", 00:26:24.234 "traddr": "10.0.0.2", 00:26:24.234 "adrfam": "ipv4", 00:26:24.234 "trsvcid": "4420", 00:26:24.234 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:24.234 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:24.234 "hdgst": false, 00:26:24.234 "ddgst": false 00:26:24.234 }, 00:26:24.234 "method": "bdev_nvme_attach_controller" 00:26:24.234 },{ 00:26:24.234 "params": { 00:26:24.234 "name": "Nvme6", 00:26:24.234 "trtype": "tcp", 00:26:24.234 "traddr": "10.0.0.2", 00:26:24.234 "adrfam": "ipv4", 00:26:24.234 "trsvcid": "4420", 00:26:24.234 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:24.234 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:24.234 "hdgst": false, 00:26:24.234 "ddgst": false 00:26:24.234 }, 00:26:24.234 "method": "bdev_nvme_attach_controller" 00:26:24.234 },{ 00:26:24.234 "params": { 00:26:24.234 "name": "Nvme7", 00:26:24.234 "trtype": "tcp", 00:26:24.234 "traddr": "10.0.0.2", 00:26:24.234 "adrfam": "ipv4", 00:26:24.234 "trsvcid": "4420", 00:26:24.234 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:24.234 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:24.234 "hdgst": false, 00:26:24.234 "ddgst": false 
00:26:24.234 }, 00:26:24.234 "method": "bdev_nvme_attach_controller" 00:26:24.234 },{ 00:26:24.234 "params": { 00:26:24.234 "name": "Nvme8", 00:26:24.234 "trtype": "tcp", 00:26:24.234 "traddr": "10.0.0.2", 00:26:24.234 "adrfam": "ipv4", 00:26:24.234 "trsvcid": "4420", 00:26:24.234 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:24.234 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:24.234 "hdgst": false, 00:26:24.234 "ddgst": false 00:26:24.234 }, 00:26:24.234 "method": "bdev_nvme_attach_controller" 00:26:24.234 },{ 00:26:24.234 "params": { 00:26:24.234 "name": "Nvme9", 00:26:24.234 "trtype": "tcp", 00:26:24.234 "traddr": "10.0.0.2", 00:26:24.234 "adrfam": "ipv4", 00:26:24.234 "trsvcid": "4420", 00:26:24.234 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:24.234 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:24.234 "hdgst": false, 00:26:24.234 "ddgst": false 00:26:24.234 }, 00:26:24.234 "method": "bdev_nvme_attach_controller" 00:26:24.234 },{ 00:26:24.234 "params": { 00:26:24.234 "name": "Nvme10", 00:26:24.234 "trtype": "tcp", 00:26:24.234 "traddr": "10.0.0.2", 00:26:24.234 "adrfam": "ipv4", 00:26:24.234 "trsvcid": "4420", 00:26:24.234 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:24.234 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:24.234 "hdgst": false, 00:26:24.234 "ddgst": false 00:26:24.234 }, 00:26:24.234 "method": "bdev_nvme_attach_controller" 00:26:24.234 }' 00:26:24.234 [2024-05-13 20:39:39.972102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.234 [2024-05-13 20:39:40.039625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.147 Running I/O for 10 seconds... 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.147 20:39:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:26:26.147 20:39:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:26.414 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:26.414 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:26.414 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:26.414 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:26.414 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.414 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.414 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.414 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:26:26.414 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:26:26.414 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3164548 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3164548 ']' 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3164548 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3164548
00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3164548'
00:26:26.685 killing process with pid 3164548
00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3164548
00:26:26.685 20:39:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3164548
00:26:26.685 Received shutdown signal, test time was about 0.948626 seconds
00:26:26.685
00:26:26.685 Latency(us)
00:26:26.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:26.685 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.685 Verification LBA range: start 0x0 length 0x400
00:26:26.685 Nvme1n1 : 0.91 211.51 13.22 0.00 0.00 298840.46 18786.99 239424.85
00:26:26.685 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.685 Verification LBA range: start 0x0 length 0x400
00:26:26.685 Nvme2n1 : 0.93 206.78 12.92 0.00 0.00 299318.04 20206.93 256901.12
00:26:26.685 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.685 Verification LBA range: start 0x0 length 0x400
00:26:26.685 Nvme3n1 : 0.94 277.47 17.34 0.00 0.00 217892.08 3932.16 249910.61
00:26:26.685 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.685 Verification LBA range: start 0x0 length 0x400
00:26:26.685 Nvme4n1 : 0.95 270.12 16.88 0.00 0.00 219863.68 22391.47 258648.75
00:26:26.685 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.685 Verification LBA range: start 0x0 length 0x400
00:26:26.685 Nvme5n1 : 0.92 208.98 13.06 0.00 0.00 276668.87 20206.93 251658.24
00:26:26.685 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.685 Verification LBA range: start 0x0 length 0x400
00:26:26.685 Nvme6n1 : 0.92 207.65 12.98 0.00 0.00 272818.63 24466.77 249910.61
00:26:26.685 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.685 Verification LBA range: start 0x0 length 0x400
00:26:26.685 Nvme7n1 : 0.93 274.89 17.18 0.00 0.00 201549.65 15073.28 249910.61
00:26:26.685 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.685 Verification LBA range: start 0x0 length 0x400
00:26:26.685 Nvme8n1 : 0.94 272.07 17.00 0.00 0.00 199191.25 19333.12 248162.99
00:26:26.685 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.685 Verification LBA range: start 0x0 length 0x400
00:26:26.685 Nvme9n1 : 0.92 234.35 14.65 0.00 0.00 220774.15 13216.43 251658.24
00:26:26.685 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.685 Verification LBA range: start 0x0 length 0x400
00:26:26.685 Nvme10n1 : 0.94 205.14 12.82 0.00 0.00 251468.80 25668.27 279620.27
00:26:26.685 ===================================================================================================================
00:26:26.685 Total : 2368.97 148.06 0.00 0.00 241317.84 3932.16 279620.27
00:26:26.685
00:26:26.946 20:39:42
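The waitforio helper whose iterations are traced at shutdown.sh@50-69 above simply polls bdevperf's iostat until the first attached bdev has completed a minimum amount of reads; in this run it passes on the third poll (3, then 67, then 131 read ops). Condensed from the trace:

    waitforio() {
        local rpc_sock=$1 bdev=$2
        local ret=1 i
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                            | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then    # enough verified reads: I/O is actively flowing
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }

With ret=0 the test knows I/O is in flight before it starts tearing things down, which is why the kill of bdevperf (pid 3164548) follows immediately above.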
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:26:27.891 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3164333 00:26:27.891 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:26:27.891 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:27.891 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:27.891 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:27.891 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:27.891 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:27.891 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:26:27.891 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:27.891 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:26:27.891 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:27.891 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:27.891 rmmod nvme_tcp 00:26:27.891 rmmod nvme_fabrics 00:26:27.891 rmmod nvme_keyring 00:26:28.153 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:28.153 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:26:28.153 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:26:28.153 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3164333 ']' 00:26:28.153 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3164333 00:26:28.153 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3164333 ']' 00:26:28.153 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3164333 00:26:28.154 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:26:28.154 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:28.154 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3164333 00:26:28.154 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:28.154 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:28.154 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3164333' 00:26:28.154 killing process with pid 3164333 00:26:28.154 20:39:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3164333 00:26:28.154 [2024-05-13 20:39:43.905244] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:28.154 20:39:43 
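The teardown traced above is the stoptarget/nvmftestfini pair: per-test artifacts are removed, the kernel NVMe-over-TCP initiator modules are unloaded, and the nvmf_tgt process is killed; the namespace and address cleanup then follows in the nvmf_tcp_fini records just below. Condensed (paths shortened, the retry loop around modprobe elided):

    # stoptarget (shutdown.sh@41-43)
    rm -f ./local-job0-0-verify.state
    rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"

    # nvmftestfini -> nvmfcleanup (common.sh@117-123)
    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, as the rmmod lines show
    modprobe -v -r nvme-fabrics

    # stop the target started for this test case
    killprocess "$nvmfpid"         # pid 3164333 in this run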
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3164333 00:26:28.416 20:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:28.416 20:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:28.416 20:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:28.416 20:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:28.416 20:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:28.416 20:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.416 20:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.416 20:39:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.334 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:30.334 00:26:30.334 real 0m8.104s 00:26:30.334 user 0m24.607s 00:26:30.334 sys 0m1.294s 00:26:30.334 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:30.334 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.334 ************************************ 00:26:30.334 END TEST nvmf_shutdown_tc2 00:26:30.334 ************************************ 00:26:30.334 20:39:46 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:30.334 20:39:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:30.334 20:39:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:30.334 20:39:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:30.596 ************************************ 00:26:30.596 START TEST nvmf_shutdown_tc3 00:26:30.596 ************************************ 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:30.596 
20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:30.596 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:30.597 20:39:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:30.597 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:30.597 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:26:30.597 Found net devices under 0000:31:00.0: cvl_0_0 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:30.597 Found net devices under 0000:31:00.1: cvl_0_1 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:30.597 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:30.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:26:30.860 00:26:30.860 --- 10.0.0.2 ping statistics --- 00:26:30.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.860 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:30.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:26:30.860 00:26:30.860 --- 10.0.0.1 ping statistics --- 00:26:30.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.860 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3166005 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3166005 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3166005 ']' 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:30.860 20:39:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:30.860 [2024-05-13 20:39:46.779479] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:26:30.860 [2024-05-13 20:39:46.779544] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.122 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.122 [2024-05-13 20:39:46.872715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:31.122 [2024-05-13 20:39:46.934508] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.122 [2024-05-13 20:39:46.934541] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:31.122 [2024-05-13 20:39:46.934546] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:31.122 [2024-05-13 20:39:46.934551] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:31.122 [2024-05-13 20:39:46.934555] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
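The triple "ip netns exec cvl_0_0_ns_spdk" prefix on this nvmf_tgt launch (the tc2 launch earlier carried two) is a side effect of common.sh@270: each pass through nvmftestinit prepends the namespace wrapper to NVMF_APP without resetting it, and the array persists across the test cases run from this script, so the prefix count grows by one per case. Re-entering the same namespace is redundant but harmless. In shorthand:

    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")   # common.sh@243
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")        # common.sh@270, once per init
    # after this file's third init the launch becomes:
    #   ip netns exec cvl_0_0_ns_spdk (x3) .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E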
00:26:31.122 [2024-05-13 20:39:46.934671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.122 [2024-05-13 20:39:46.934816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:31.122 [2024-05-13 20:39:46.934979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.122 [2024-05-13 20:39:46.934982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:31.694 [2024-05-13 20:39:47.599657] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:31.694 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:31.695 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:31.695 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:31.695 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:31.695 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:31.695 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:31.695 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:31.695 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:31.695 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:31.695 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:31.957 20:39:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:31.957 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:31.957 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:31.957 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:31.957 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:31.957 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:31.957 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:31.957 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:31.957 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:31.957 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:31.957 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:31.957 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.957 20:39:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:31.957 Malloc1 00:26:31.957 [2024-05-13 20:39:47.698597] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:31.957 [2024-05-13 20:39:47.698792] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.957 Malloc2 00:26:31.957 Malloc3 00:26:31.957 Malloc4 00:26:31.957 Malloc5 00:26:31.957 Malloc6 00:26:32.222 Malloc7 00:26:32.222 Malloc8 00:26:32.222 Malloc9 00:26:32.222 Malloc10 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3166382 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3166382 /var/tmp/bdevperf.sock 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3166382 ']' 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:32.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
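The bdevperf invocation traced below drives I/O against the ten subsystems through a JSON config built on the fly by gen_nvmf_target_json and handed over an anonymous file descriptor. Reformatted for readability, each generated entry has the shape sketched here (values taken from the expanded config printed further down; the sketch is a readability aid, not part of the original script):

  # one entry per subsystem, consumed by bdevperf via --json /dev/fd/63
  # {
  #   "params": {
  #     "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420",
  #     "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1",
  #     "hdgst": false, "ddgst": false
  #   },
  #   "method": "bdev_nvme_attach_controller"
  # }
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10   # queue depth 64, 64 KiB I/O, verify workload, 10 s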
00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.222 { 00:26:32.222 "params": { 00:26:32.222 "name": "Nvme$subsystem", 00:26:32.222 "trtype": "$TEST_TRANSPORT", 00:26:32.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.222 "adrfam": "ipv4", 00:26:32.222 "trsvcid": "$NVMF_PORT", 00:26:32.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.222 "hdgst": ${hdgst:-false}, 00:26:32.222 "ddgst": ${ddgst:-false} 00:26:32.222 }, 00:26:32.222 "method": "bdev_nvme_attach_controller" 00:26:32.222 } 00:26:32.222 EOF 00:26:32.222 )") 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.222 { 00:26:32.222 "params": { 00:26:32.222 "name": "Nvme$subsystem", 00:26:32.222 "trtype": "$TEST_TRANSPORT", 00:26:32.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.222 "adrfam": "ipv4", 00:26:32.222 "trsvcid": "$NVMF_PORT", 00:26:32.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.222 "hdgst": ${hdgst:-false}, 00:26:32.222 "ddgst": ${ddgst:-false} 00:26:32.222 }, 00:26:32.222 "method": "bdev_nvme_attach_controller" 00:26:32.222 } 00:26:32.222 EOF 00:26:32.222 )") 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.222 { 00:26:32.222 "params": { 00:26:32.222 "name": "Nvme$subsystem", 00:26:32.222 "trtype": "$TEST_TRANSPORT", 00:26:32.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.222 "adrfam": "ipv4", 00:26:32.222 "trsvcid": "$NVMF_PORT", 00:26:32.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.222 "hdgst": ${hdgst:-false}, 00:26:32.222 "ddgst": ${ddgst:-false} 00:26:32.222 }, 00:26:32.222 "method": "bdev_nvme_attach_controller" 00:26:32.222 } 00:26:32.222 EOF 00:26:32.222 )") 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.222 { 00:26:32.222 "params": { 00:26:32.222 "name": "Nvme$subsystem", 00:26:32.222 "trtype": "$TEST_TRANSPORT", 00:26:32.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.222 "adrfam": "ipv4", 00:26:32.222 "trsvcid": "$NVMF_PORT", 00:26:32.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.222 "hdgst": ${hdgst:-false}, 00:26:32.222 "ddgst": ${ddgst:-false} 00:26:32.222 }, 00:26:32.222 "method": "bdev_nvme_attach_controller" 00:26:32.222 } 00:26:32.222 EOF 00:26:32.222 )") 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.222 { 00:26:32.222 "params": { 00:26:32.222 "name": "Nvme$subsystem", 00:26:32.222 "trtype": "$TEST_TRANSPORT", 00:26:32.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.222 "adrfam": "ipv4", 00:26:32.222 "trsvcid": "$NVMF_PORT", 00:26:32.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.222 "hdgst": ${hdgst:-false}, 00:26:32.222 "ddgst": ${ddgst:-false} 00:26:32.222 }, 00:26:32.222 "method": "bdev_nvme_attach_controller" 00:26:32.222 } 00:26:32.222 EOF 00:26:32.222 )") 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:32.222 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.222 { 00:26:32.222 "params": { 00:26:32.222 "name": "Nvme$subsystem", 00:26:32.222 "trtype": "$TEST_TRANSPORT", 00:26:32.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.222 "adrfam": "ipv4", 00:26:32.222 "trsvcid": "$NVMF_PORT", 00:26:32.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.223 "hdgst": ${hdgst:-false}, 00:26:32.223 "ddgst": ${ddgst:-false} 00:26:32.223 }, 00:26:32.223 "method": "bdev_nvme_attach_controller" 00:26:32.223 } 00:26:32.223 EOF 00:26:32.223 )") 00:26:32.223 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:32.223 [2024-05-13 20:39:48.136901] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:26:32.223 [2024-05-13 20:39:48.136952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3166382 ] 00:26:32.223 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:32.223 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.223 { 00:26:32.223 "params": { 00:26:32.223 "name": "Nvme$subsystem", 00:26:32.223 "trtype": "$TEST_TRANSPORT", 00:26:32.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.223 "adrfam": "ipv4", 00:26:32.223 "trsvcid": "$NVMF_PORT", 00:26:32.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.223 "hdgst": ${hdgst:-false}, 00:26:32.223 "ddgst": ${ddgst:-false} 00:26:32.223 }, 00:26:32.223 "method": "bdev_nvme_attach_controller" 00:26:32.223 } 00:26:32.223 EOF 00:26:32.223 )") 00:26:32.223 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:32.223 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:32.223 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.223 { 00:26:32.223 "params": { 00:26:32.223 "name": "Nvme$subsystem", 00:26:32.223 "trtype": "$TEST_TRANSPORT", 00:26:32.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.223 "adrfam": "ipv4", 00:26:32.223 "trsvcid": "$NVMF_PORT", 00:26:32.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.223 "hdgst": ${hdgst:-false}, 00:26:32.223 "ddgst": ${ddgst:-false} 00:26:32.223 }, 00:26:32.223 "method": "bdev_nvme_attach_controller" 00:26:32.223 } 00:26:32.223 EOF 00:26:32.223 )") 00:26:32.223 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:32.223 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:32.223 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.223 { 00:26:32.223 "params": { 00:26:32.223 "name": "Nvme$subsystem", 00:26:32.223 "trtype": "$TEST_TRANSPORT", 00:26:32.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.223 "adrfam": "ipv4", 00:26:32.223 "trsvcid": "$NVMF_PORT", 00:26:32.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.223 "hdgst": ${hdgst:-false}, 00:26:32.223 "ddgst": ${ddgst:-false} 00:26:32.223 }, 00:26:32.223 "method": "bdev_nvme_attach_controller" 00:26:32.223 } 00:26:32.223 EOF 00:26:32.223 )") 00:26:32.223 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:32.605 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:32.605 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.605 { 00:26:32.605 "params": { 00:26:32.605 "name": "Nvme$subsystem", 00:26:32.605 "trtype": "$TEST_TRANSPORT", 00:26:32.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.605 "adrfam": "ipv4", 00:26:32.605 "trsvcid": "$NVMF_PORT", 00:26:32.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.605 "hdgst": ${hdgst:-false}, 
00:26:32.605 "ddgst": ${ddgst:-false} 00:26:32.605 }, 00:26:32.605 "method": "bdev_nvme_attach_controller" 00:26:32.605 } 00:26:32.605 EOF 00:26:32.605 )") 00:26:32.605 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:32.605 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.605 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:26:32.605 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:26:32.605 20:39:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:32.605 "params": { 00:26:32.605 "name": "Nvme1", 00:26:32.605 "trtype": "tcp", 00:26:32.605 "traddr": "10.0.0.2", 00:26:32.605 "adrfam": "ipv4", 00:26:32.605 "trsvcid": "4420", 00:26:32.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:32.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:32.605 "hdgst": false, 00:26:32.605 "ddgst": false 00:26:32.605 }, 00:26:32.605 "method": "bdev_nvme_attach_controller" 00:26:32.605 },{ 00:26:32.605 "params": { 00:26:32.605 "name": "Nvme2", 00:26:32.605 "trtype": "tcp", 00:26:32.605 "traddr": "10.0.0.2", 00:26:32.605 "adrfam": "ipv4", 00:26:32.605 "trsvcid": "4420", 00:26:32.605 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:32.605 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:32.605 "hdgst": false, 00:26:32.605 "ddgst": false 00:26:32.605 }, 00:26:32.605 "method": "bdev_nvme_attach_controller" 00:26:32.605 },{ 00:26:32.605 "params": { 00:26:32.605 "name": "Nvme3", 00:26:32.605 "trtype": "tcp", 00:26:32.605 "traddr": "10.0.0.2", 00:26:32.605 "adrfam": "ipv4", 00:26:32.605 "trsvcid": "4420", 00:26:32.605 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:32.605 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:32.605 "hdgst": false, 00:26:32.605 "ddgst": false 00:26:32.605 }, 00:26:32.605 "method": "bdev_nvme_attach_controller" 00:26:32.605 },{ 00:26:32.605 "params": { 00:26:32.605 "name": "Nvme4", 00:26:32.605 "trtype": "tcp", 00:26:32.605 "traddr": "10.0.0.2", 00:26:32.605 "adrfam": "ipv4", 00:26:32.605 "trsvcid": "4420", 00:26:32.605 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:32.605 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:32.605 "hdgst": false, 00:26:32.605 "ddgst": false 00:26:32.605 }, 00:26:32.605 "method": "bdev_nvme_attach_controller" 00:26:32.605 },{ 00:26:32.605 "params": { 00:26:32.605 "name": "Nvme5", 00:26:32.605 "trtype": "tcp", 00:26:32.605 "traddr": "10.0.0.2", 00:26:32.605 "adrfam": "ipv4", 00:26:32.605 "trsvcid": "4420", 00:26:32.605 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:32.605 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:32.605 "hdgst": false, 00:26:32.605 "ddgst": false 00:26:32.605 }, 00:26:32.605 "method": "bdev_nvme_attach_controller" 00:26:32.605 },{ 00:26:32.605 "params": { 00:26:32.605 "name": "Nvme6", 00:26:32.605 "trtype": "tcp", 00:26:32.605 "traddr": "10.0.0.2", 00:26:32.605 "adrfam": "ipv4", 00:26:32.605 "trsvcid": "4420", 00:26:32.605 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:32.605 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:32.605 "hdgst": false, 00:26:32.605 "ddgst": false 00:26:32.605 }, 00:26:32.605 "method": "bdev_nvme_attach_controller" 00:26:32.605 },{ 00:26:32.605 "params": { 00:26:32.605 "name": "Nvme7", 00:26:32.605 "trtype": "tcp", 00:26:32.605 "traddr": "10.0.0.2", 00:26:32.605 "adrfam": "ipv4", 00:26:32.605 "trsvcid": "4420", 00:26:32.605 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:32.605 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:32.605 "hdgst": false, 00:26:32.605 "ddgst": false 
00:26:32.605 }, 00:26:32.605 "method": "bdev_nvme_attach_controller" 00:26:32.605 },{ 00:26:32.605 "params": { 00:26:32.605 "name": "Nvme8", 00:26:32.605 "trtype": "tcp", 00:26:32.605 "traddr": "10.0.0.2", 00:26:32.605 "adrfam": "ipv4", 00:26:32.605 "trsvcid": "4420", 00:26:32.605 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:32.605 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:32.605 "hdgst": false, 00:26:32.605 "ddgst": false 00:26:32.605 }, 00:26:32.605 "method": "bdev_nvme_attach_controller" 00:26:32.605 },{ 00:26:32.605 "params": { 00:26:32.605 "name": "Nvme9", 00:26:32.605 "trtype": "tcp", 00:26:32.605 "traddr": "10.0.0.2", 00:26:32.605 "adrfam": "ipv4", 00:26:32.605 "trsvcid": "4420", 00:26:32.605 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:32.605 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:32.605 "hdgst": false, 00:26:32.605 "ddgst": false 00:26:32.605 }, 00:26:32.605 "method": "bdev_nvme_attach_controller" 00:26:32.605 },{ 00:26:32.605 "params": { 00:26:32.605 "name": "Nvme10", 00:26:32.605 "trtype": "tcp", 00:26:32.605 "traddr": "10.0.0.2", 00:26:32.605 "adrfam": "ipv4", 00:26:32.605 "trsvcid": "4420", 00:26:32.605 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:32.605 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:32.605 "hdgst": false, 00:26:32.605 "ddgst": false 00:26:32.605 }, 00:26:32.605 "method": "bdev_nvme_attach_controller" 00:26:32.605 }' 00:26:32.605 [2024-05-13 20:39:48.203972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.605 [2024-05-13 20:39:48.268755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.994 Running I/O for 10 seconds... 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:33.994 20:39:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:26:33.994 20:39:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:34.255 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:34.255 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:34.255 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:34.255 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:34.255 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.255 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:34.255 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.255 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:26:34.255 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:26:34.255 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3166005 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 3166005 ']' 00:26:34.517 20:39:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 3166005 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:34.517 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3166005 00:26:34.797 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:34.797 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:34.797 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3166005' 00:26:34.797 killing process with pid 3166005 00:26:34.797 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 3166005 00:26:34.797 [2024-05-13 20:39:50.472534] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:34.797 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 3166005 00:26:34.797 [2024-05-13 20:39:50.472961] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.797 [2024-05-13 20:39:50.472987] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.797 [2024-05-13 20:39:50.472998] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.797 [2024-05-13 20:39:50.473003] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.797 [2024-05-13 20:39:50.473008] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.797 [2024-05-13 20:39:50.473013] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.797 [2024-05-13 20:39:50.473018] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.797 [2024-05-13 20:39:50.473022] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.797 [2024-05-13 20:39:50.473027] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.797 [2024-05-13 20:39:50.473031] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.797 [2024-05-13 20:39:50.473036] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.797 [2024-05-13 20:39:50.473040] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.797 [2024-05-13 20:39:50.473045] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.797 [2024-05-13 20:39:50.473049] 
tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.797 [2024-05-13 20:39:50.473053] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473058] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473062] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473067] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473071] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473075] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473080] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473084] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473089] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473093] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473097] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473102] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473106] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473110] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473115] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473120] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473125] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473129] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473134] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473138] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473142] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the 
state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473147] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473152] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473156] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473160] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473165] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473169] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473173] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473177] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473182] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473186] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473190] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473194] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473198] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473203] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473207] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473212] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473216] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473220] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473224] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473229] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473233] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473239] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.473244] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d0a0 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474012] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474037] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474044] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474051] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474058] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474064] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474071] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474078] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474084] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474091] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474097] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474103] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474110] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474116] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474123] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474129] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474136] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474142] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474149] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474155] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 
20:39:50.474162] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474168] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474175] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474182] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474188] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474195] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474205] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474212] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474218] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474225] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474232] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474238] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474245] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474251] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474257] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474264] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474270] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474276] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474283] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474289] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474296] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474302] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same 
with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474308] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.798 [2024-05-13 20:39:50.474318] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474325] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474332] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474339] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474345] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474352] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474358] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474365] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474371] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474377] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474386] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474392] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474399] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474405] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474411] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474418] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474424] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474431] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474438] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474444] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ea00 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.474570] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.799 [2024-05-13 20:39:50.474607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.799 [2024-05-13 20:39:50.474618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.799 [2024-05-13 20:39:50.474629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.799 [2024-05-13 20:39:50.474637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.799 [2024-05-13 20:39:50.474644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.799 [2024-05-13 20:39:50.474652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.799 [2024-05-13 20:39:50.474659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.799 [2024-05-13 20:39:50.474666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6cdb0 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475857] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475870] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475877] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475884] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475890] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475897] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475903] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475912] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475919] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475925] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475931] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475937] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475944] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475950] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475956] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475963] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475969] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475975] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475981] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475988] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.475994] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476001] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476007] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476014] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476020] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476027] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476033] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476039] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476045] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476052] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476058] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476064] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476070] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476077] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 
20:39:50.476085] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476091] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476098] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476105] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476111] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476118] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476124] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476131] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476137] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476143] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476150] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476156] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476163] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476169] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476176] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476182] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476188] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476194] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476201] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476207] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476213] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476219] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same 
with the state(5) to be set 00:26:34.799 [2024-05-13 20:39:50.476226] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.800 [2024-05-13 20:39:50.476232] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.800 [2024-05-13 20:39:50.476238] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.800 [2024-05-13 20:39:50.476244] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.800 [2024-05-13 20:39:50.476250] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.800 [2024-05-13 20:39:50.476257] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.800 [2024-05-13 20:39:50.476264] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d540 is same with the state(5) to be set 00:26:34.800 [2024-05-13 20:39:50.480687] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:34.800 [2024-05-13 20:39:50.481392] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:34.800 [2024-05-13 20:39:50.481432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 
20:39:50.481543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.800 [2024-05-13 20:39:50.481712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.800 [2024-05-13 20:39:50.481721] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.800 [2024-05-13 20:39:50.481728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.800 [2024-05-13 20:39:50.481725] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.800 [2024-05-13 20:39:50.481745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.800 [2024-05-13 20:39:50.481746] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481754] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.800 [2024-05-13 20:39:50.481759] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481764] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.800 [2024-05-13 20:39:50.481772] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481777] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.800 [2024-05-13 20:39:50.481784] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.800 [2024-05-13 20:39:50.481789] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481796] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.800 [2024-05-13 20:39:50.481801] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481807] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.800 [2024-05-13 20:39:50.481812] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481817] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.800 [2024-05-13 20:39:50.481824] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.800 [2024-05-13 20:39:50.481829] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481834] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.800 [2024-05-13 20:39:50.481839] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481844] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.800 [2024-05-13 20:39:50.481851] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.800 [2024-05-13 20:39:50.481856] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481863] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.800 [2024-05-13 20:39:50.481868] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481873] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.800 [2024-05-13 20:39:50.481874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.800 [2024-05-13 20:39:50.481878] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.801 [2024-05-13 20:39:50.481883] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481891] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.801 [2024-05-13 20:39:50.481895] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481902] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.801 [2024-05-13 20:39:50.481907] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481912] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.801 [2024-05-13 20:39:50.481916] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.801 [2024-05-13 20:39:50.481922] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481929] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.801 [2024-05-13 20:39:50.481933] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481940] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.801 [2024-05-13 20:39:50.481946] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481951] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.801 [2024-05-13 20:39:50.481958] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.801 [2024-05-13 20:39:50.481963] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481968] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.801 [2024-05-13 20:39:50.481973] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.801 [2024-05-13 20:39:50.481981] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481987] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.801 [2024-05-13 20:39:50.481991] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.481995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.801 [2024-05-13 20:39:50.481997] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482003] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.801 [2024-05-13 20:39:50.482007] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482012] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.801 [2024-05-13 20:39:50.482017] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482022] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.801 [2024-05-13 20:39:50.482027] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.801 [2024-05-13 20:39:50.482032] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482037] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.801 [2024-05-13 20:39:50.482042] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482047] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.801 [2024-05-13 20:39:50.482052] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482057] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.801 [2024-05-13 20:39:50.482062] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482068] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.801 [2024-05-13 20:39:50.482074] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d9e0 is same with the state(5) to be set
00:26:34.801 [2024-05-13 20:39:50.482078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.801 [2024-05-13 20:39:50.482086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.801 [2024-05-13 20:39:50.482095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.801 [2024-05-13 20:39:50.482102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.801 [2024-05-13 20:39:50.482111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.801 [2024-05-13 20:39:50.482118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.801 [2024-05-13 20:39:50.482126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.801 [2024-05-13 20:39:50.482134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.801 [2024-05-13 20:39:50.482143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.801 [2024-05-13 20:39:50.482150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.801 [2024-05-13 20:39:50.482158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.801 [2024-05-13 20:39:50.482166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.801 [2024-05-13 20:39:50.482175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.801 [2024-05-13 20:39:50.482182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.801 [2024-05-13 20:39:50.482191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.801 [2024-05-13 20:39:50.482198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.801 [2024-05-13 20:39:50.482207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.801 [2024-05-13 20:39:50.482214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.801 [2024-05-13 20:39:50.482223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.801 [2024-05-13 20:39:50.482230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.801 [2024-05-13 20:39:50.482240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.801 [2024-05-13 20:39:50.482248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.801 [2024-05-13 20:39:50.482257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.801 [2024-05-13 20:39:50.482264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.801 [2024-05-13 20:39:50.482273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.801 [2024-05-13 20:39:50.482280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.801 [2024-05-13 
20:39:50.482289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482453] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.802 [2024-05-13 20:39:50.482542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.802 [2024-05-13 20:39:50.482550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb32bf0 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.482606] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb32bf0 was disconnected and freed. reset controller. 
00:26:34.802 [2024-05-13 20:39:50.482667] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:34.802 [2024-05-13 20:39:50.482866] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5de80 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483066] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483083] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483088] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483092] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483097] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483101] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483106] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483110] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483115] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483119] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483124] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483132] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483137] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483141] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483146] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483150] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483154] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483159] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483163] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483167] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to 
be set 00:26:34.802 [2024-05-13 20:39:50.483171] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483176] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483180] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483184] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483189] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483193] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483197] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483202] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483206] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.802 [2024-05-13 20:39:50.483210] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483215] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483219] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483223] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483228] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483232] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483237] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483241] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483245] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483252] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483256] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483261] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483265] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483269] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483274] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483278] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483282] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483286] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483290] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483295] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483299] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483306] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483318] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483326] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483334] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483341] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483349] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483358] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483365] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483373] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483380] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483388] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483392] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.483397] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e320 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484121] 
tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484135] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484144] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484151] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484157] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484164] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484170] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484177] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484183] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484189] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484195] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484202] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484208] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484214] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484221] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484227] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484234] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484240] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484247] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484253] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484259] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484266] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the 
state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484273] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484279] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484285] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484292] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484298] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484304] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484312] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484326] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484333] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484340] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484346] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484352] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484359] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484365] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484371] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484377] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484384] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484390] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484397] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484403] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484409] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484415] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484421] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484427] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484434] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484440] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484446] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484453] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484459] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484465] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484471] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.803 [2024-05-13 20:39:50.484477] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.804 [2024-05-13 20:39:50.484483] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.804 [2024-05-13 20:39:50.484490] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.804 [2024-05-13 20:39:50.484497] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.804 [2024-05-13 20:39:50.484504] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.804 [2024-05-13 20:39:50.484511] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.804 [2024-05-13 20:39:50.484517] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.804 [2024-05-13 20:39:50.484523] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.804 [2024-05-13 20:39:50.484529] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.804 [2024-05-13 20:39:50.484536] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbb0 is same with the state(5) to be set 00:26:34.804 [2024-05-13 20:39:50.484802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.484822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:34.804 [2024-05-13 20:39:50.484835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.484844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.484854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.484863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.484874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.484882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.484893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.484901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.484912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.484921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.484932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.484941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.484950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.484958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.484967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.484974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.484983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.484993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 
20:39:50.485019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485179] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.804 [2024-05-13 20:39:50.485337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.804 [2024-05-13 20:39:50.485346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.804 [2024-05-13 20:39:50.485353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.804 [2024-05-13 20:39:50.485361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.804 [2024-05-13 20:39:50.485368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.804 [2024-05-13 20:39:50.485377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.804 [2024-05-13 20:39:50.485384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.804 [2024-05-13 20:39:50.485393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.804 [2024-05-13 20:39:50.485402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.804 [2024-05-13 20:39:50.485398] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.804 [2024-05-13 20:39:50.485413] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.805 [2024-05-13 20:39:50.485419] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.805 [2024-05-13 20:39:50.485425] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485430] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.805 [2024-05-13 20:39:50.485435] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485440] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.805 [2024-05-13 20:39:50.485447] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485452] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.805 [2024-05-13 20:39:50.485459] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.805 [2024-05-13 20:39:50.485463] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485469] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.805 [2024-05-13 20:39:50.485473] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485479] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.805 [2024-05-13 20:39:50.485486] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.805 [2024-05-13 20:39:50.485491] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.805 [2024-05-13 20:39:50.485501] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485506] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.805 [2024-05-13 20:39:50.485511] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485516] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.805 [2024-05-13 20:39:50.485523] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.805 [2024-05-13 20:39:50.485528] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485535] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.805 [2024-05-13 20:39:50.485540] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485545] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.805 [2024-05-13 20:39:50.485550] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.805 [2024-05-13 20:39:50.485556] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485561] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.805 [2024-05-13 20:39:50.485565] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485570] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.805 [2024-05-13 20:39:50.485578] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485583] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.805 [2024-05-13 20:39:50.485589] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.805 [2024-05-13 20:39:50.485595] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485600] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.805 [2024-05-13 20:39:50.485605] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.805 [2024-05-13 20:39:50.485613] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485618] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.805 [2024-05-13 20:39:50.485625] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.805 [2024-05-13 20:39:50.485630] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485636] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.805 [2024-05-13 20:39:50.485640] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.805 [2024-05-13 20:39:50.485645] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485653] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.805 [2024-05-13 20:39:50.485657] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485663] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.805 [2024-05-13 20:39:50.485670] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485675] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.805 [2024-05-13 20:39:50.485675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.806 [2024-05-13 20:39:50.485681] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.806 [2024-05-13 20:39:50.485688] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485693] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.806 [2024-05-13 20:39:50.485700] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.806 [2024-05-13 20:39:50.485705] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485710] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.806 [2024-05-13 20:39:50.485715] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.806 [2024-05-13 20:39:50.485720] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485728] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.806 [2024-05-13 20:39:50.485733] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485738] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.806 [2024-05-13 20:39:50.485743] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485748] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.806 [2024-05-13 20:39:50.485756] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.806 [2024-05-13 20:39:50.485761] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0050 is same with the state(5) to be set
00:26:34.806 [2024-05-13 20:39:50.485767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.806 [2024-05-13 20:39:50.485774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.806 [2024-05-13 20:39:50.485783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.806 [2024-05-13 20:39:50.485792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.806 [2024-05-13 20:39:50.485801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.806 [2024-05-13 20:39:50.485807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.806 [2024-05-13 20:39:50.485816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.806 [2024-05-13 20:39:50.485823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.806 [2024-05-13 20:39:50.485832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.806 [2024-05-13 20:39:50.485839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.806 [2024-05-13 20:39:50.485848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.806 [2024-05-13 20:39:50.485855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.806 [2024-05-13 20:39:50.485864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.806 [2024-05-13 20:39:50.485870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.806 [2024-05-13 20:39:50.485880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.806 [2024-05-13 20:39:50.485887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.806 [2024-05-13 20:39:50.485897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.806 [2024-05-13 20:39:50.485904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.806 [2024-05-13 20:39:50.485913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.806 [2024-05-13 20:39:50.485919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.806 [2024-05-13 20:39:50.485927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc9930 is same with the state(5) to be set 00:26:34.806 [2024-05-13 20:39:50.485977] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbc9930 was disconnected and freed. reset controller. 00:26:34.806 [2024-05-13 20:39:50.486068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.806 [2024-05-13 20:39:50.486079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.806 [2024-05-13 20:39:50.486091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.806 [2024-05-13 20:39:50.486098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.806 [2024-05-13 20:39:50.486107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.806 [2024-05-13 20:39:50.486114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486190] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486303] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a04f0 is same with the state(5) to be set 00:26:34.807 [2024-05-13 20:39:50.486310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486626] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.807 [2024-05-13 20:39:50.486657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486704] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.807 [2024-05-13 20:39:50.486750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486800] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.807 [2024-05-13 20:39:50.486852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 20:39:50.486899] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.807 [2024-05-13 20:39:50.486947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.807 [2024-05-13 20:39:50.486994] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.807 [2024-05-13 20:39:50.487054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.807 [2024-05-13 
20:39:50.487098] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.487146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.808 [2024-05-13 20:39:50.487193] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.487242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.808 [2024-05-13 20:39:50.487287] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.487353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.808 [2024-05-13 20:39:50.487401] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.487453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.808 [2024-05-13 20:39:50.487499] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.487552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.808 [2024-05-13 20:39:50.487603] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.487653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.808 [2024-05-13 20:39:50.487697] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.487747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.808 [2024-05-13 20:39:50.487793] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.487848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.808 [2024-05-13 20:39:50.487890] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.487939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.808 [2024-05-13 20:39:50.487987] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.488037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.808 [2024-05-13 20:39:50.488086] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 
00:26:34.808 [2024-05-13 20:39:50.488140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.808 [2024-05-13 20:39:50.488186] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.488239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.808 [2024-05-13 20:39:50.488283] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.488338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.808 [2024-05-13 20:39:50.488384] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.488440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.808 [2024-05-13 20:39:50.488484] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.488535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.808 [2024-05-13 20:39:50.488586] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.488638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.808 [2024-05-13 20:39:50.488687] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.488735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.808 [2024-05-13 20:39:50.488783] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.488833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.808 [2024-05-13 20:39:50.488878] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.488932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.808 [2024-05-13 20:39:50.488978] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.489029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.808 [2024-05-13 20:39:50.489074] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.489127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:34.808 [2024-05-13 20:39:50.489174] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.489228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.808 [2024-05-13 20:39:50.489272] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.489328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.808 [2024-05-13 20:39:50.489378] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.489430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.808 [2024-05-13 20:39:50.489478] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.489532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.808 [2024-05-13 20:39:50.489581] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.489631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.808 [2024-05-13 20:39:50.489677] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.808 [2024-05-13 20:39:50.489730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.808 [2024-05-13 20:39:50.489777] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.489829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.809 [2024-05-13 20:39:50.489879] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.489928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.809 [2024-05-13 20:39:50.489981] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.490036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.809 [2024-05-13 20:39:50.490082] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.490134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.809 [2024-05-13 20:39:50.490179] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.490240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.809 [2024-05-13 20:39:50.490290] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.490356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.809 [2024-05-13 20:39:50.490448] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.490506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.809 [2024-05-13 20:39:50.490549] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.490601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.809 [2024-05-13 20:39:50.490652] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.490704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.809 [2024-05-13 20:39:50.490750] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.490799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.809 [2024-05-13 20:39:50.490846] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.490897] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.490944] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.490992] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491052] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491100] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491158] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491208] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491258] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491307] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the 
state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491368] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491416] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491467] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491515] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491570] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491619] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491670] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491719] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491769] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491819] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.491875] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e690 is same with the state(5) to be set 00:26:34.809 [2024-05-13 20:39:50.505870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.809 [2024-05-13 20:39:50.505909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.809 [2024-05-13 20:39:50.505920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.809 [2024-05-13 20:39:50.505928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.809 [2024-05-13 20:39:50.505938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.809 [2024-05-13 20:39:50.505946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.809 [2024-05-13 20:39:50.505955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.809 [2024-05-13 20:39:50.505962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.809 [2024-05-13 20:39:50.505972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.809 [2024-05-13 20:39:50.505980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.809 [2024-05-13 20:39:50.505989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.809 [2024-05-13 20:39:50.505996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.809 [2024-05-13 20:39:50.506010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.809 [2024-05-13 20:39:50.506017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.809 [2024-05-13 20:39:50.506026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.809 [2024-05-13 20:39:50.506034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.809 [2024-05-13 20:39:50.506043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506329] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb34720 was disconnected and freed. reset controller. 
00:26:34.810 [2024-05-13 20:39:50.506898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.506987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.506994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.507003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.507010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.507019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.507026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.507035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.507042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.507051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.507058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.507067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.507078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 
[2024-05-13 20:39:50.507087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.507094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.507103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.507110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.507119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.810 [2024-05-13 20:39:50.507126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.810 [2024-05-13 20:39:50.507135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 
20:39:50.507246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 
20:39:50.507418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.811 [2024-05-13 20:39:50.507549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.811 [2024-05-13 20:39:50.507556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507580] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.812 [2024-05-13 20:39:50.507957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.812 [2024-05-13 20:39:50.507985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.812 [2024-05-13 20:39:50.508026] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa68240 was disconnected and freed. reset controller. 00:26:34.813 [2024-05-13 20:39:50.508097] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.813 [2024-05-13 20:39:50.508131] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6cdb0 (9): Bad file descriptor 00:26:34.813 [2024-05-13 20:39:50.508163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x573610 is same with the state(5) to be set 
00:26:34.813 [2024-05-13 20:39:50.508257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc118c0 is same with the state(5) to be set 00:26:34.813 [2024-05-13 20:39:50.508359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2a5f0 is same with the state(5) to be set 00:26:34.813 [2024-05-13 20:39:50.508443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.813 [2024-05-13 20:39:50.508495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.813 [2024-05-13 20:39:50.508501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc364d0 is same with the state(5) to be set 00:26:34.814 [2024-05-13 20:39:50.508521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc37380 is same with the state(5) to be set 00:26:34.814 [2024-05-13 20:39:50.508601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:34.814 [2024-05-13 20:39:50.508643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9bab0 is same with the state(5) to be set 00:26:34.814 [2024-05-13 20:39:50.508693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b0d0 is same with the state(5) to be set 00:26:34.814 [2024-05-13 20:39:50.508769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508821] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa983a0 is same with the state(5) to be set 00:26:34.814 [2024-05-13 20:39:50.508851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.814 [2024-05-13 20:39:50.508903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.508909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc11350 is same with the state(5) to be set 00:26:34.814 [2024-05-13 20:39:50.510168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.814 [2024-05-13 20:39:50.510182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.510195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.814 [2024-05-13 20:39:50.510203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.510214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.814 [2024-05-13 20:39:50.510223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.510233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.814 [2024-05-13 20:39:50.510242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.510255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.814 [2024-05-13 20:39:50.510263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:34.814 [2024-05-13 20:39:50.510274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 
[2024-05-13 20:39:50.510454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 
20:39:50.510622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.815 [2024-05-13 20:39:50.510742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.815 [2024-05-13 20:39:50.510751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510784] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.510984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.510993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.511000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.511009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.511016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.511026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.511033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.511042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.511049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.511058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.511065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.515812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.515848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.515863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.515871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.515880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.515887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.515896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.515903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.515912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.515919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.515928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.515935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.515945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.515952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.515960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.515967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.515976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.515983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.515992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.515999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.516008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.816 [2024-05-13 20:39:50.516016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.816 [2024-05-13 20:39:50.516092] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb33390 was disconnected and freed. reset controller. 
00:26:34.816 [2024-05-13 20:39:50.518707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:34.816 [2024-05-13 20:39:50.518741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:26:34.816 [2024-05-13 20:39:50.518759] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7b0d0 (9): Bad file descriptor 00:26:34.816 [2024-05-13 20:39:50.518771] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc37380 (9): Bad file descriptor 00:26:34.816 [2024-05-13 20:39:50.518811] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x573610 (9): Bad file descriptor 00:26:34.816 [2024-05-13 20:39:50.518833] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc118c0 (9): Bad file descriptor 00:26:34.817 [2024-05-13 20:39:50.518854] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2a5f0 (9): Bad file descriptor 00:26:34.817 [2024-05-13 20:39:50.518867] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc364d0 (9): Bad file descriptor 00:26:34.817 [2024-05-13 20:39:50.518882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9bab0 (9): Bad file descriptor 00:26:34.817 [2024-05-13 20:39:50.518899] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa983a0 (9): Bad file descriptor 00:26:34.817 [2024-05-13 20:39:50.518911] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc11350 (9): Bad file descriptor 00:26:34.817 [2024-05-13 20:39:50.520612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:26:34.817 [2024-05-13 20:39:50.520637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:34.817 [2024-05-13 20:39:50.521116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.817 [2024-05-13 20:39:50.521549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.817 [2024-05-13 20:39:50.521587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6cdb0 with addr=10.0.0.2, port=4420 00:26:34.817 [2024-05-13 20:39:50.521600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6cdb0 is same with the state(5) to be set 00:26:34.817 [2024-05-13 20:39:50.523011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.817 [2024-05-13 20:39:50.523281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.817 [2024-05-13 20:39:50.523291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc37380 with addr=10.0.0.2, port=4420 00:26:34.817 [2024-05-13 20:39:50.523299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc37380 is same with the state(5) to be set 00:26:34.817 [2024-05-13 20:39:50.523615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.817 [2024-05-13 20:39:50.524098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.817 [2024-05-13 20:39:50.524111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa7b0d0 with addr=10.0.0.2, port=4420 00:26:34.817 [2024-05-13 
20:39:50.524121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b0d0 is same with the state(5) to be set 00:26:34.817 [2024-05-13 20:39:50.524626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.817 [2024-05-13 20:39:50.524884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.817 [2024-05-13 20:39:50.524896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc364d0 with addr=10.0.0.2, port=4420 00:26:34.817 [2024-05-13 20:39:50.524906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc364d0 is same with the state(5) to be set 00:26:34.817 [2024-05-13 20:39:50.525044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.817 [2024-05-13 20:39:50.525385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.817 [2024-05-13 20:39:50.525395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa983a0 with addr=10.0.0.2, port=4420 00:26:34.817 [2024-05-13 20:39:50.525402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa983a0 is same with the state(5) to be set 00:26:34.817 [2024-05-13 20:39:50.525414] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6cdb0 (9): Bad file descriptor 00:26:34.817 [2024-05-13 20:39:50.525758] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:34.817 [2024-05-13 20:39:50.525803] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:34.817 [2024-05-13 20:39:50.525839] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:34.817 [2024-05-13 20:39:50.525888] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:34.817 [2024-05-13 20:39:50.525908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc37380 (9): Bad file descriptor 00:26:34.817 [2024-05-13 20:39:50.525918] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7b0d0 (9): Bad file descriptor 00:26:34.817 [2024-05-13 20:39:50.525927] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc364d0 (9): Bad file descriptor 00:26:34.817 [2024-05-13 20:39:50.525936] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa983a0 (9): Bad file descriptor 00:26:34.817 [2024-05-13 20:39:50.525944] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.817 [2024-05-13 20:39:50.525951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.817 [2024-05-13 20:39:50.525958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.817 [2024-05-13 20:39:50.526063] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.817 [2024-05-13 20:39:50.526074] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:34.817 [2024-05-13 20:39:50.526081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:26:34.817 [2024-05-13 20:39:50.526088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:26:34.817 [2024-05-13 20:39:50.526098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:26:34.817 [2024-05-13 20:39:50.526104] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:26:34.817 [2024-05-13 20:39:50.526111] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:26:34.817 [2024-05-13 20:39:50.526122] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:26:34.817 [2024-05-13 20:39:50.526128] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:26:34.817 [2024-05-13 20:39:50.526134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:26:34.817 [2024-05-13 20:39:50.526146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:34.817 [2024-05-13 20:39:50.526152] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:34.817 [2024-05-13 20:39:50.526159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:26:34.817 [2024-05-13 20:39:50.526201] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.817 [2024-05-13 20:39:50.526209] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.817 [2024-05-13 20:39:50.526215] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.817 [2024-05-13 20:39:50.526221] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.817 [2024-05-13 20:39:50.528841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.817 [2024-05-13 20:39:50.528857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.817 [2024-05-13 20:39:50.528873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.817 [2024-05-13 20:39:50.528881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.817 [2024-05-13 20:39:50.528890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.817 [2024-05-13 20:39:50.528901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.817 [2024-05-13 20:39:50.528910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.817 [2024-05-13 20:39:50.528918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.817 [2024-05-13 20:39:50.528927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.817 [2024-05-13 20:39:50.528933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.817 [2024-05-13 20:39:50.528943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.528950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.528959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.528966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.528975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.528982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.528991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.528998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 
20:39:50.529022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529186] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.818 [2024-05-13 20:39:50.529421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.818 [2024-05-13 20:39:50.529429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.819 [2024-05-13 20:39:50.529781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.819 [2024-05-13 20:39:50.529788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.529797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.529804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.529813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.529820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.529829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.529837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.529846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.529853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.529861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.529868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.529877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.529884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.529893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.529900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.529909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64340 is same with the state(5) to be set 00:26:34.820 [2024-05-13 20:39:50.531186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531299] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.820 [2024-05-13 20:39:50.531563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.820 [2024-05-13 20:39:50.531572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:34.821 [2024-05-13 20:39:50.531961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.531984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.531993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.532000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.532010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.532017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.532026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.532037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.821 [2024-05-13 20:39:50.532046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.821 [2024-05-13 20:39:50.532053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.532061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.532068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.532078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.532085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.532094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.532101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.532110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.532117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 
20:39:50.532126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.532133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.532142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.532148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.532158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.532164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.532173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.532180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.532189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.532196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.532205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.532212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.532221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.532228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.532238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.532245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.532253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa65840 is same with the state(5) to be set 00:26:34.822 [2024-05-13 20:39:50.533528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533558] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.822 [2024-05-13 20:39:50.533804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.822 [2024-05-13 20:39:50.533814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.533820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.533830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.533836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.533846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.533853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.533862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.533869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.533878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.533885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.533894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.533901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.533910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.533920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.533930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.533937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.533946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.533953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.533962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.533969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.533978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.533985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.533994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.823 [2024-05-13 20:39:50.534374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.823 [2024-05-13 20:39:50.534381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.534390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.534397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.534406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.534413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.534422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.534429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.534438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.534445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.534455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.534462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.534471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.534478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.534486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.534494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.534503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.534510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.534519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.534526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.534535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.534543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.534553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.534560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.534569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.534576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.534584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66d40 is same with the state(5) to be set 00:26:34.824 [2024-05-13 20:39:50.535839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.535850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.535861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.535869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.535878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.535885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.535894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.535901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.535911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.535918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.535927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.535935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.535943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.535951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.535960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.535967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.535976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.535983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.535992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.536001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.536010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.536017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.536027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.536034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.536043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.536050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.536059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.536066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.536075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.536082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.536091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.536099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.536108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.536115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.536124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.536131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.536140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.536147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.536156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.536163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.536172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.824 [2024-05-13 20:39:50.536179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.824 [2024-05-13 20:39:50.536188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.825 [2024-05-13 20:39:50.536605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.825 [2024-05-13 20:39:50.536615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:34.825 [2024-05-13 20:39:50.536622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 
20:39:50.536785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.536883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.536891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb44950 is same with the state(5) to be set 00:26:34.826 [2024-05-13 20:39:50.539674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.539990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.539999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.540006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.540017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.540023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.540033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.540039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.540048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.540055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.540064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.540071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.540081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.540088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.540097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.540104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.540113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.540120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.540129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.540136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.826 [2024-05-13 20:39:50.540144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.826 [2024-05-13 20:39:50.540152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.827 [2024-05-13 20:39:50.540726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.827 [2024-05-13 20:39:50.540733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb45db0 is same with the state(5) to be set 00:26:34.827 [2024-05-13 20:39:50.542217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:26:34.827 [2024-05-13 20:39:50.542239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:26:34.827 [2024-05-13 20:39:50.542250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:26:34.827 [2024-05-13 20:39:50.542260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:26:34.827 [2024-05-13 20:39:50.542350] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.827 task offset: 17024 on job bdev=Nvme1n1 fails 00:26:34.827 00:26:34.827 Latency(us) 00:26:34.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.827 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.827 Job: Nvme1n1 ended in about 0.92 seconds with error 00:26:34.827 Verification LBA range: start 0x0 length 0x400 00:26:34.827 Nvme1n1 : 0.92 144.74 9.05 69.65 0.00 295145.56 6635.52 249910.61 00:26:34.827 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.827 Job: Nvme2n1 ended in about 0.94 seconds with error 00:26:34.827 Verification LBA range: start 0x0 length 0x400 00:26:34.827 Nvme2n1 : 0.94 203.35 12.71 67.78 0.00 228639.79 25668.27 242920.11 00:26:34.827 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.827 Job: Nvme3n1 ended in about 0.95 seconds with error 00:26:34.827 Verification LBA range: start 0x0 length 0x400 00:26:34.827 Nvme3n1 : 0.95 201.17 12.57 67.06 0.00 226428.37 19988.48 267386.88 00:26:34.827 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.827 Job: Nvme4n1 ended in about 0.95 seconds with error 00:26:34.827 Verification LBA range: start 0x0 length 0x400 00:26:34.827 Nvme4n1 : 0.95 201.82 12.61 67.27 0.00 220903.89 19551.57 251658.24 00:26:34.827 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.827 Job: Nvme5n1 ended in about 0.97 seconds with error 00:26:34.827 Verification LBA range: start 0x0 length 0x400 00:26:34.827 Nvme5n1 : 0.97 132.63 8.29 66.31 0.00 292859.73 20206.93 267386.88 00:26:34.827 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.827 Job: Nvme6n1 ended in about 0.97 seconds with error 00:26:34.827 Verification LBA range: start 0x0 length 0x400 00:26:34.827 Nvme6n1 : 0.97 132.31 8.27 66.15 0.00 287397.83 23046.83 253405.87 00:26:34.827 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.827 Job: Nvme7n1 ended in about 0.97 seconds with error 00:26:34.827 Verification LBA range: start 0x0 length 0x400 00:26:34.827 Nvme7n1 : 0.97 197.99 12.37 66.00 0.00 211357.12 13216.43 255153.49 00:26:34.827 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.827 Job: Nvme8n1 ended in about 0.95 
seconds with error 00:26:34.827 Verification LBA range: start 0x0 length 0x400 00:26:34.827 Nvme8n1 : 0.95 201.54 12.60 67.18 0.00 202309.23 10868.05 251658.24 00:26:34.827 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.827 Job: Nvme9n1 ended in about 0.97 seconds with error 00:26:34.827 Verification LBA range: start 0x0 length 0x400 00:26:34.827 Nvme9n1 : 0.97 131.68 8.23 65.84 0.00 269982.44 20753.07 258648.75 00:26:34.827 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.827 Job: Nvme10n1 ended in about 0.98 seconds with error 00:26:34.827 Verification LBA range: start 0x0 length 0x400 00:26:34.827 Nvme10n1 : 0.98 131.16 8.20 65.58 0.00 264952.04 19770.03 272629.76 00:26:34.827 =================================================================================================================== 00:26:34.827 Total : 1678.40 104.90 668.83 0.00 245526.94 6635.52 272629.76 00:26:34.828 [2024-05-13 20:39:50.566680] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:34.828 [2024-05-13 20:39:50.566725] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:34.828 [2024-05-13 20:39:50.567110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.567362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.567373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa9bab0 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-05-13 20:39:50.567383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9bab0 is same with the state(5) to be set 00:26:34.828 [2024-05-13 20:39:50.567763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.568038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.568047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x573610 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-05-13 20:39:50.568055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x573610 is same with the state(5) to be set 00:26:34.828 [2024-05-13 20:39:50.568436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.568809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.568818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc11350 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-05-13 20:39:50.568825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc11350 is same with the state(5) to be set 00:26:34.828 [2024-05-13 20:39:50.569201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.569426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.569436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc118c0 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-05-13 20:39:50.569443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc118c0 is same with the state(5) to be set 00:26:34.828 [2024-05-13 20:39:50.570780] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.828 [2024-05-13 20:39:50.570794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:34.828 [2024-05-13 20:39:50.570803] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:26:34.828 [2024-05-13 20:39:50.570813] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:26:34.828 [2024-05-13 20:39:50.570822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:34.828 [2024-05-13 20:39:50.571153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.571486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.571497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2a5f0 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-05-13 20:39:50.571504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2a5f0 is same with the state(5) to be set 00:26:34.828 [2024-05-13 20:39:50.571516] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9bab0 (9): Bad file descriptor 00:26:34.828 [2024-05-13 20:39:50.571527] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x573610 (9): Bad file descriptor 00:26:34.828 [2024-05-13 20:39:50.571536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc11350 (9): Bad file descriptor 00:26:34.828 [2024-05-13 20:39:50.571545] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc118c0 (9): Bad file descriptor 00:26:34.828 [2024-05-13 20:39:50.571580] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.828 [2024-05-13 20:39:50.571591] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.828 [2024-05-13 20:39:50.571603] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:34.828 [2024-05-13 20:39:50.571614] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
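Editor's note - a quick sanity check of the bdevperf Latency(us) summary a few entries above (a sketch, not part of the test): the run uses 64 KiB (65536-byte) I/O, so MiB/s should equal IOPS divided by 16, which the Total row matches.
  awk 'BEGIN { iops = 1678.40; printf "%.2f MiB/s\n", iops * 65536 / (1024 * 1024) }'
  # prints 104.90 MiB/s, consistent with "Total : 1678.40 104.90 668.83 ..." above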
00:26:34.828 [2024-05-13 20:39:50.572281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.572539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.572555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6cdb0 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-05-13 20:39:50.572563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6cdb0 is same with the state(5) to be set 00:26:34.828 [2024-05-13 20:39:50.572948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.573104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.573112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa983a0 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-05-13 20:39:50.573120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa983a0 is same with the state(5) to be set 00:26:34.828 [2024-05-13 20:39:50.573534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.574030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.574039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc364d0 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-05-13 20:39:50.574046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc364d0 is same with the state(5) to be set 00:26:34.828 [2024-05-13 20:39:50.574388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.574774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.574783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa7b0d0 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-05-13 20:39:50.574790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b0d0 is same with the state(5) to be set 00:26:34.828 [2024-05-13 20:39:50.574925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.575324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.828 [2024-05-13 20:39:50.575334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc37380 with addr=10.0.0.2, port=4420 00:26:34.828 [2024-05-13 20:39:50.575341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc37380 is same with the state(5) to be set 00:26:34.828 [2024-05-13 20:39:50.575351] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2a5f0 (9): Bad file descriptor 00:26:34.828 [2024-05-13 20:39:50.575360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:26:34.828 [2024-05-13 20:39:50.575367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:26:34.828 [2024-05-13 20:39:50.575375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
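Editor's note on the repeated "connect() failed, errno = 111" entries above: errno 111 on Linux is ECONNREFUSED, which is consistent with the target listener on 10.0.0.2:4420 already being gone while reconnect attempts were still in flight. A minimal check (sketch only, assuming the iproute2 ss tool is available in the namespace that would host the listener):
  ss -tln | grep ':4420' || echo 'nothing listening on 4420 -> connect() returns ECONNREFUSED (errno 111)'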
00:26:34.828 [2024-05-13 20:39:50.575387] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:26:34.828 [2024-05-13 20:39:50.575393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:26:34.828 [2024-05-13 20:39:50.575400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:26:34.828 [2024-05-13 20:39:50.575409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:26:34.828 [2024-05-13 20:39:50.575415] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:26:34.828 [2024-05-13 20:39:50.575422] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:26:34.828 [2024-05-13 20:39:50.575431] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:26:34.828 [2024-05-13 20:39:50.575438] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:26:34.828 [2024-05-13 20:39:50.575444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:26:34.828 [2024-05-13 20:39:50.575512] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.828 [2024-05-13 20:39:50.575520] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.828 [2024-05-13 20:39:50.575526] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.828 [2024-05-13 20:39:50.575532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.828 [2024-05-13 20:39:50.575539] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6cdb0 (9): Bad file descriptor 00:26:34.828 [2024-05-13 20:39:50.575548] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa983a0 (9): Bad file descriptor 00:26:34.828 [2024-05-13 20:39:50.575557] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc364d0 (9): Bad file descriptor 00:26:34.828 [2024-05-13 20:39:50.575566] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7b0d0 (9): Bad file descriptor 00:26:34.828 [2024-05-13 20:39:50.575574] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc37380 (9): Bad file descriptor 00:26:34.828 [2024-05-13 20:39:50.575585] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:26:34.828 [2024-05-13 20:39:50.575591] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:26:34.828 [2024-05-13 20:39:50.575597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:34.828 [2024-05-13 20:39:50.575635] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.828 [2024-05-13 20:39:50.575643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.828 [2024-05-13 20:39:50.575650] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.828 [2024-05-13 20:39:50.575656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.828 [2024-05-13 20:39:50.575665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:34.828 [2024-05-13 20:39:50.575671] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:34.828 [2024-05-13 20:39:50.575678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:26:34.828 [2024-05-13 20:39:50.575687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:26:34.828 [2024-05-13 20:39:50.575693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:26:34.828 [2024-05-13 20:39:50.575700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:26:34.828 [2024-05-13 20:39:50.575710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:26:34.828 [2024-05-13 20:39:50.575716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:26:34.828 [2024-05-13 20:39:50.575722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:26:34.828 [2024-05-13 20:39:50.575731] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:34.828 [2024-05-13 20:39:50.575737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:26:34.828 [2024-05-13 20:39:50.575744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:26:34.828 [2024-05-13 20:39:50.575772] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.828 [2024-05-13 20:39:50.575778] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.828 [2024-05-13 20:39:50.575784] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.828 [2024-05-13 20:39:50.575790] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.828 [2024-05-13 20:39:50.575796] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.089 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:26:35.089 20:39:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3166382 00:26:36.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3166382) - No such process 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.033 rmmod nvme_tcp 00:26:36.033 rmmod nvme_fabrics 00:26:36.033 rmmod nvme_keyring 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.033 20:39:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.959 20:39:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:37.959 00:26:37.959 real 0m7.608s 00:26:37.959 user 0m17.936s 00:26:37.959 sys 0m1.190s 00:26:37.959 
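Editor's note: the "No such process" from shutdown.sh line 142 above is benign - the recorded target PID had already exited, and the trace (the kill followed by true at the same script line) suggests the script tolerates the failure, roughly like this sketch (variable name is a guess, not taken from the script):
  kill -9 "$pid" || true   # ignore the error if the target already went away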
20:39:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:37.959 20:39:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:37.959 ************************************ 00:26:37.959 END TEST nvmf_shutdown_tc3 00:26:37.959 ************************************ 00:26:38.221 20:39:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:26:38.221 00:26:38.221 real 0m33.306s 00:26:38.221 user 1m16.172s 00:26:38.221 sys 0m9.757s 00:26:38.221 20:39:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:38.221 20:39:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:38.221 ************************************ 00:26:38.221 END TEST nvmf_shutdown 00:26:38.221 ************************************ 00:26:38.221 20:39:53 nvmf_tcp -- nvmf/nvmf.sh@84 -- # timing_exit target 00:26:38.221 20:39:53 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:38.221 20:39:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:38.221 20:39:54 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_enter host 00:26:38.221 20:39:54 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:38.221 20:39:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:38.221 20:39:54 nvmf_tcp -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:26:38.221 20:39:54 nvmf_tcp -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:38.221 20:39:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:38.221 20:39:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:38.221 20:39:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:38.221 ************************************ 00:26:38.221 START TEST nvmf_multicontroller 00:26:38.221 ************************************ 00:26:38.221 20:39:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:38.482 * Looking for test storage... 
00:26:38.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:38.482 20:39:54 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:26:38.482 20:39:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.629 20:40:01 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:46.629 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:46.630 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:46.630 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:46.630 Found net devices under 0000:31:00.0: cvl_0_0 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:46.630 Found net devices under 0000:31:00.1: cvl_0_1 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:46.630 20:40:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:46.630 20:40:02 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:46.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:46.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:26:46.630 00:26:46.630 --- 10.0.0.2 ping statistics --- 00:26:46.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.630 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:46.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:46.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:26:46.630 00:26:46.630 --- 10.0.0.1 ping statistics --- 00:26:46.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.630 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3171797 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3171797 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3171797 ']' 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:46.630 20:40:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.630 [2024-05-13 20:40:02.293288] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:26:46.630 [2024-05-13 20:40:02.293349] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.630 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.630 [2024-05-13 20:40:02.386886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:46.630 [2024-05-13 20:40:02.478233] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.630 [2024-05-13 20:40:02.478290] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.630 [2024-05-13 20:40:02.478298] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:46.630 [2024-05-13 20:40:02.478306] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:46.630 [2024-05-13 20:40:02.478321] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:46.630 [2024-05-13 20:40:02.478471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:46.630 [2024-05-13 20:40:02.478735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:46.631 [2024-05-13 20:40:02.478740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.205 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:47.205 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:26:47.205 20:40:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:47.205 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:47.205 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.205 20:40:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.205 20:40:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:47.205 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.205 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.205 [2024-05-13 20:40:03.116361] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.205 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.205 20:40:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:47.205 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.205 20:40:03 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.467 Malloc0 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.467 [2024-05-13 20:40:03.189029] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:47.467 [2024-05-13 20:40:03.189249] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.467 [2024-05-13 20:40:03.201167] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.467 Malloc1 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.467 20:40:03 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3172038 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:47.467 20:40:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:47.468 20:40:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3172038 /var/tmp/bdevperf.sock 00:26:47.468 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3172038 ']' 00:26:47.468 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:47.468 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:47.468 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:47.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
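Editor's note: the target-side rpc_cmd calls traced above correspond to driving the running nvmf_tgt over its RPC socket by hand. A sketch using the SPDK repo's rpc.py (assumptions: rpc_cmd is the test wrapper around scripts/rpc.py and the target uses the default /var/tmp/spdk.sock); the RPC names and flags are taken from the trace above:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420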
00:26:47.468 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:47.468 20:40:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:48.409 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:48.409 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:26:48.409 20:40:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:48.409 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.409 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:48.409 NVMe0n1 00:26:48.409 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.409 20:40:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:48.409 20:40:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:48.409 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.409 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.670 1 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:48.670 request: 00:26:48.670 { 00:26:48.670 "name": "NVMe0", 00:26:48.670 "trtype": "tcp", 00:26:48.670 "traddr": "10.0.0.2", 00:26:48.670 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:48.670 "hostaddr": "10.0.0.2", 00:26:48.670 "hostsvcid": "60000", 00:26:48.670 "adrfam": "ipv4", 00:26:48.670 "trsvcid": "4420", 00:26:48.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:48.670 "method": 
"bdev_nvme_attach_controller", 00:26:48.670 "req_id": 1 00:26:48.670 } 00:26:48.670 Got JSON-RPC error response 00:26:48.670 response: 00:26:48.670 { 00:26:48.670 "code": -114, 00:26:48.670 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:48.670 } 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:48.670 request: 00:26:48.670 { 00:26:48.670 "name": "NVMe0", 00:26:48.670 "trtype": "tcp", 00:26:48.670 "traddr": "10.0.0.2", 00:26:48.670 "hostaddr": "10.0.0.2", 00:26:48.670 "hostsvcid": "60000", 00:26:48.670 "adrfam": "ipv4", 00:26:48.670 "trsvcid": "4420", 00:26:48.670 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:48.670 "method": "bdev_nvme_attach_controller", 00:26:48.670 "req_id": 1 00:26:48.670 } 00:26:48.670 Got JSON-RPC error response 00:26:48.670 response: 00:26:48.670 { 00:26:48.670 "code": -114, 00:26:48.670 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:48.670 } 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:48.670 request: 00:26:48.670 { 00:26:48.670 "name": "NVMe0", 00:26:48.670 "trtype": "tcp", 00:26:48.670 "traddr": "10.0.0.2", 00:26:48.670 "hostaddr": "10.0.0.2", 00:26:48.670 "hostsvcid": "60000", 00:26:48.670 "adrfam": "ipv4", 00:26:48.670 "trsvcid": "4420", 00:26:48.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:48.670 "multipath": "disable", 00:26:48.670 "method": "bdev_nvme_attach_controller", 00:26:48.670 "req_id": 1 00:26:48.670 } 00:26:48.670 Got JSON-RPC error response 00:26:48.670 response: 00:26:48.670 { 00:26:48.670 "code": -114, 00:26:48.670 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:48.670 } 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:48.670 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:48.671 request: 00:26:48.671 { 00:26:48.671 "name": "NVMe0", 00:26:48.671 "trtype": "tcp", 00:26:48.671 "traddr": "10.0.0.2", 00:26:48.671 "hostaddr": "10.0.0.2", 00:26:48.671 "hostsvcid": "60000", 00:26:48.671 "adrfam": "ipv4", 00:26:48.671 "trsvcid": "4420", 00:26:48.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:48.671 "multipath": "failover", 00:26:48.671 "method": "bdev_nvme_attach_controller", 00:26:48.671 "req_id": 1 00:26:48.671 } 00:26:48.671 Got JSON-RPC error response 00:26:48.671 response: 00:26:48.671 { 00:26:48.671 "code": -114, 00:26:48.671 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:48.671 } 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:48.671 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.671 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:48.932 00:26:48.932 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.932 20:40:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:48.932 20:40:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:48.932 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.932 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:48.932 20:40:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.932 20:40:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:48.932 20:40:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:50.316 0 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3172038 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3172038 ']' 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3172038 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3172038 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3172038' 00:26:50.316 killing process with pid 3172038 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3172038 00:26:50.316 20:40:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3172038 00:26:50.316 20:40:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:50.316 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.316 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.316 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.316 20:40:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:50.316 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.316 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:50.316 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.316 20:40:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:26:50.316 20:40:06 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:50.316 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:26:50.316 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:50.316 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:26:50.316 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:26:50.316 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:50.316 [2024-05-13 20:40:03.323790] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:26:50.316 [2024-05-13 20:40:03.323871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3172038 ] 00:26:50.316 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.316 [2024-05-13 20:40:03.390592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.316 [2024-05-13 20:40:03.455407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.316 [2024-05-13 20:40:04.785334] bdev.c:4555:bdev_name_add: *ERROR*: Bdev name 5f199ee7-aee9-44ef-ae61-fe1aa7f0f199 already exists 00:26:50.316 [2024-05-13 20:40:04.785366] bdev.c:7672:bdev_register: *ERROR*: Unable to add uuid:5f199ee7-aee9-44ef-ae61-fe1aa7f0f199 alias for bdev NVMe1n1 00:26:50.317 [2024-05-13 20:40:04.785377] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:50.317 Running I/O for 1 seconds... 
00:26:50.317 00:26:50.317 Latency(us) 00:26:50.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.317 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:50.317 NVMe0n1 : 1.00 20275.82 79.20 0.00 0.00 6296.32 3986.77 11523.41 00:26:50.317 =================================================================================================================== 00:26:50.317 Total : 20275.82 79.20 0.00 0.00 6296.32 3986.77 11523.41 00:26:50.317 Received shutdown signal, test time was about 1.000000 seconds 00:26:50.317 00:26:50.317 Latency(us) 00:26:50.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.317 =================================================================================================================== 00:26:50.317 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:50.317 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:50.317 rmmod nvme_tcp 00:26:50.317 rmmod nvme_fabrics 00:26:50.317 rmmod nvme_keyring 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3171797 ']' 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3171797 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3171797 ']' 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3171797 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:50.317 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3171797 00:26:50.579 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:50.579 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:50.579 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3171797' 00:26:50.579 killing process with pid 3171797 00:26:50.579 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3171797 00:26:50.579 [2024-05-13 
20:40:06.287893] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:50.579 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3171797 00:26:50.579 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:50.579 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:50.579 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:50.579 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:50.579 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:50.579 20:40:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.579 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:50.579 20:40:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.120 20:40:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:53.120 00:26:53.120 real 0m14.413s 00:26:53.120 user 0m17.189s 00:26:53.120 sys 0m6.698s 00:26:53.120 20:40:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:53.120 20:40:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:53.120 ************************************ 00:26:53.120 END TEST nvmf_multicontroller 00:26:53.120 ************************************ 00:26:53.120 20:40:08 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:53.120 20:40:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:53.120 20:40:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:53.120 20:40:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.120 ************************************ 00:26:53.120 START TEST nvmf_aer 00:26:53.120 ************************************ 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:53.120 * Looking for test storage... 
00:26:53.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:26:53.120 20:40:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.259 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.259 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:01.259 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:01.259 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:27:01.259 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:01.259 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:01.259 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:01.260 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:27:01.260 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:01.260 Found net devices under 0000:31:00.0: cvl_0_0 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:01.260 Found net devices under 0000:31:00.1: cvl_0_1 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.260 
20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:01.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:27:01.260 00:27:01.260 --- 10.0.0.2 ping statistics --- 00:27:01.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.260 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:01.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:27:01.260 00:27:01.260 --- 10.0.0.1 ping statistics --- 00:27:01.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.260 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.260 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:01.261 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:01.261 20:40:16 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:01.261 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:01.261 20:40:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:01.261 20:40:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.261 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3177181 00:27:01.261 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3177181 00:27:01.261 20:40:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3177181 ']' 00:27:01.261 20:40:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.261 20:40:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:01.261 20:40:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.261 20:40:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:01.261 20:40:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.261 20:40:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:01.261 [2024-05-13 20:40:16.697153] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:27:01.261 [2024-05-13 20:40:16.697213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.261 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.261 [2024-05-13 20:40:16.775243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:01.261 [2024-05-13 20:40:16.851022] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.261 [2024-05-13 20:40:16.851061] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:01.261 [2024-05-13 20:40:16.851068] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.261 [2024-05-13 20:40:16.851074] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.261 [2024-05-13 20:40:16.851080] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:01.261 [2024-05-13 20:40:16.851218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.261 [2024-05-13 20:40:16.851330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.261 [2024-05-13 20:40:16.851458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:01.261 [2024-05-13 20:40:16.851461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.835 [2024-05-13 20:40:17.532939] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.835 Malloc0 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.835 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.835 [2024-05-13 20:40:17.592025] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:01.836 [2024-05-13 20:40:17.592276] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:01.836 [ 00:27:01.836 { 00:27:01.836 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:01.836 "subtype": "Discovery", 00:27:01.836 "listen_addresses": [], 00:27:01.836 "allow_any_host": true, 00:27:01.836 "hosts": [] 00:27:01.836 }, 00:27:01.836 { 00:27:01.836 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:01.836 "subtype": "NVMe", 00:27:01.836 "listen_addresses": [ 00:27:01.836 { 00:27:01.836 "trtype": "TCP", 00:27:01.836 "adrfam": "IPv4", 00:27:01.836 "traddr": "10.0.0.2", 00:27:01.836 "trsvcid": "4420" 00:27:01.836 } 00:27:01.836 ], 00:27:01.836 "allow_any_host": true, 00:27:01.836 "hosts": [], 00:27:01.836 "serial_number": "SPDK00000000000001", 00:27:01.836 "model_number": "SPDK bdev Controller", 00:27:01.836 "max_namespaces": 2, 00:27:01.836 "min_cntlid": 1, 00:27:01.836 "max_cntlid": 65519, 00:27:01.836 "namespaces": [ 00:27:01.836 { 00:27:01.836 "nsid": 1, 00:27:01.836 "bdev_name": "Malloc0", 00:27:01.836 "name": "Malloc0", 00:27:01.836 "nguid": "1A2F60F6C3824A3A869F9C3ED24A4C85", 00:27:01.836 "uuid": "1a2f60f6-c382-4a3a-869f-9c3ed24a4c85" 00:27:01.836 } 00:27:01.836 ] 00:27:01.836 } 00:27:01.836 ] 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3177518 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:27:01.836 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:27:01.836 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:02.098 Malloc1 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:02.098 Asynchronous Event Request test 00:27:02.098 Attaching to 10.0.0.2 00:27:02.098 Attached to 10.0.0.2 00:27:02.098 Registering asynchronous event callbacks... 00:27:02.098 Starting namespace attribute notice tests for all controllers... 00:27:02.098 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:02.098 aer_cb - Changed Namespace 00:27:02.098 Cleaning up... 00:27:02.098 [ 00:27:02.098 { 00:27:02.098 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:02.098 "subtype": "Discovery", 00:27:02.098 "listen_addresses": [], 00:27:02.098 "allow_any_host": true, 00:27:02.098 "hosts": [] 00:27:02.098 }, 00:27:02.098 { 00:27:02.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:02.098 "subtype": "NVMe", 00:27:02.098 "listen_addresses": [ 00:27:02.098 { 00:27:02.098 "trtype": "TCP", 00:27:02.098 "adrfam": "IPv4", 00:27:02.098 "traddr": "10.0.0.2", 00:27:02.098 "trsvcid": "4420" 00:27:02.098 } 00:27:02.098 ], 00:27:02.098 "allow_any_host": true, 00:27:02.098 "hosts": [], 00:27:02.098 "serial_number": "SPDK00000000000001", 00:27:02.098 "model_number": "SPDK bdev Controller", 00:27:02.098 "max_namespaces": 2, 00:27:02.098 "min_cntlid": 1, 00:27:02.098 "max_cntlid": 65519, 00:27:02.098 "namespaces": [ 00:27:02.098 { 00:27:02.098 "nsid": 1, 00:27:02.098 "bdev_name": "Malloc0", 00:27:02.098 "name": "Malloc0", 00:27:02.098 "nguid": "1A2F60F6C3824A3A869F9C3ED24A4C85", 00:27:02.098 "uuid": "1a2f60f6-c382-4a3a-869f-9c3ed24a4c85" 00:27:02.098 }, 00:27:02.098 { 00:27:02.098 "nsid": 2, 00:27:02.098 "bdev_name": "Malloc1", 00:27:02.098 "name": "Malloc1", 00:27:02.098 "nguid": "F1B001A0751A470A9330622D91D7C2AE", 00:27:02.098 "uuid": "f1b001a0-751a-470a-9330-622d91d7c2ae" 00:27:02.098 } 00:27:02.098 ] 00:27:02.098 } 00:27:02.098 ] 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3177518 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:02.098 20:40:17 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:02.098 20:40:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:02.098 rmmod nvme_tcp 00:27:02.098 rmmod nvme_fabrics 00:27:02.098 rmmod nvme_keyring 00:27:02.098 20:40:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:02.098 20:40:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:02.098 20:40:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:02.098 20:40:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3177181 ']' 00:27:02.098 20:40:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3177181 00:27:02.098 20:40:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3177181 ']' 00:27:02.098 20:40:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3177181 00:27:02.098 20:40:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:27:02.098 20:40:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:02.098 20:40:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3177181 00:27:02.360 20:40:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:02.360 20:40:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:02.360 20:40:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3177181' 00:27:02.360 killing process with pid 3177181 00:27:02.360 20:40:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3177181 00:27:02.360 [2024-05-13 20:40:18.067820] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:02.360 20:40:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3177181 00:27:02.360 20:40:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:02.360 20:40:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:02.360 20:40:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:02.360 20:40:18 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:02.360 20:40:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:02.360 20:40:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.360 20:40:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.360 20:40:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.914 20:40:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:04.914 00:27:04.914 real 0m11.684s 00:27:04.914 user 0m7.708s 00:27:04.914 sys 0m6.263s 00:27:04.914 20:40:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:04.914 20:40:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:04.914 ************************************ 00:27:04.914 END TEST nvmf_aer 00:27:04.914 ************************************ 00:27:04.914 20:40:20 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:04.914 20:40:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:04.914 20:40:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:04.914 20:40:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:04.914 ************************************ 00:27:04.914 START TEST nvmf_async_init 00:27:04.914 ************************************ 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:04.914 * Looking for test storage... 00:27:04.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.914 
20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.914 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5aaa31b05eee4ef7b5bd8783596c63e7 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.915 20:40:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:13.177 20:40:28 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:13.177 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:13.178 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:13.178 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.178 20:40:28 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:13.178 Found net devices under 0000:31:00.0: cvl_0_0 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:13.178 Found net devices under 0000:31:00.1: cvl_0_1 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:13.178 20:40:28 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:13.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:27:13.178 00:27:13.178 --- 10.0.0.2 ping statistics --- 00:27:13.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.178 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:13.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:27:13.178 00:27:13.178 --- 10.0.0.1 ping statistics --- 00:27:13.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.178 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3182209 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3182209 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 3182209 ']' 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:13.178 20:40:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.178 [2024-05-13 20:40:28.550408] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:27:13.178 [2024-05-13 20:40:28.550473] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.178 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.178 [2024-05-13 20:40:28.628000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.178 [2024-05-13 20:40:28.700676] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.178 [2024-05-13 20:40:28.700716] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:13.178 [2024-05-13 20:40:28.700724] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.178 [2024-05-13 20:40:28.700731] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.178 [2024-05-13 20:40:28.700736] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:13.178 [2024-05-13 20:40:28.700754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.439 [2024-05-13 20:40:29.359379] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.439 null0 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.439 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5aaa31b05eee4ef7b5bd8783596c63e7 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:13.700 
20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.700 [2024-05-13 20:40:29.419470] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:13.700 [2024-05-13 20:40:29.419648] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.700 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.961 nvme0n1 00:27:13.961 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.961 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:13.961 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.961 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.961 [ 00:27:13.961 { 00:27:13.961 "name": "nvme0n1", 00:27:13.961 "aliases": [ 00:27:13.961 "5aaa31b0-5eee-4ef7-b5bd-8783596c63e7" 00:27:13.961 ], 00:27:13.961 "product_name": "NVMe disk", 00:27:13.961 "block_size": 512, 00:27:13.961 "num_blocks": 2097152, 00:27:13.961 "uuid": "5aaa31b0-5eee-4ef7-b5bd-8783596c63e7", 00:27:13.961 "assigned_rate_limits": { 00:27:13.961 "rw_ios_per_sec": 0, 00:27:13.961 "rw_mbytes_per_sec": 0, 00:27:13.961 "r_mbytes_per_sec": 0, 00:27:13.961 "w_mbytes_per_sec": 0 00:27:13.961 }, 00:27:13.961 "claimed": false, 00:27:13.961 "zoned": false, 00:27:13.961 "supported_io_types": { 00:27:13.961 "read": true, 00:27:13.961 "write": true, 00:27:13.961 "unmap": false, 00:27:13.961 "write_zeroes": true, 00:27:13.961 "flush": true, 00:27:13.961 "reset": true, 00:27:13.961 "compare": true, 00:27:13.961 "compare_and_write": true, 00:27:13.961 "abort": true, 00:27:13.961 "nvme_admin": true, 00:27:13.961 "nvme_io": true 00:27:13.961 }, 00:27:13.961 "memory_domains": [ 00:27:13.961 { 00:27:13.961 "dma_device_id": "system", 00:27:13.961 "dma_device_type": 1 00:27:13.961 } 00:27:13.961 ], 00:27:13.961 "driver_specific": { 00:27:13.961 "nvme": [ 00:27:13.961 { 00:27:13.961 "trid": { 00:27:13.961 "trtype": "TCP", 00:27:13.961 "adrfam": "IPv4", 00:27:13.961 "traddr": "10.0.0.2", 00:27:13.961 "trsvcid": "4420", 00:27:13.961 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:13.961 }, 00:27:13.961 "ctrlr_data": { 00:27:13.961 "cntlid": 1, 00:27:13.961 "vendor_id": "0x8086", 00:27:13.961 "model_number": "SPDK bdev Controller", 00:27:13.961 "serial_number": "00000000000000000000", 00:27:13.961 "firmware_revision": "24.05", 00:27:13.961 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:13.961 "oacs": { 00:27:13.961 "security": 0, 00:27:13.961 "format": 0, 00:27:13.961 "firmware": 0, 00:27:13.961 "ns_manage": 0 00:27:13.961 }, 00:27:13.961 "multi_ctrlr": true, 00:27:13.961 "ana_reporting": false 00:27:13.961 }, 00:27:13.961 "vs": { 00:27:13.961 "nvme_version": "1.3" 00:27:13.961 }, 00:27:13.961 "ns_data": { 00:27:13.961 "id": 1, 00:27:13.961 "can_share": true 00:27:13.961 } 
00:27:13.961 } 00:27:13.961 ], 00:27:13.961 "mp_policy": "active_passive" 00:27:13.961 } 00:27:13.961 } 00:27:13.961 ] 00:27:13.961 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.961 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:13.961 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.961 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.961 [2024-05-13 20:40:29.689441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:13.961 [2024-05-13 20:40:29.689499] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4abe0 (9): Bad file descriptor 00:27:13.961 [2024-05-13 20:40:29.821405] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:13.961 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.961 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:13.961 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.961 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.961 [ 00:27:13.961 { 00:27:13.961 "name": "nvme0n1", 00:27:13.961 "aliases": [ 00:27:13.961 "5aaa31b0-5eee-4ef7-b5bd-8783596c63e7" 00:27:13.961 ], 00:27:13.961 "product_name": "NVMe disk", 00:27:13.961 "block_size": 512, 00:27:13.961 "num_blocks": 2097152, 00:27:13.961 "uuid": "5aaa31b0-5eee-4ef7-b5bd-8783596c63e7", 00:27:13.961 "assigned_rate_limits": { 00:27:13.961 "rw_ios_per_sec": 0, 00:27:13.961 "rw_mbytes_per_sec": 0, 00:27:13.961 "r_mbytes_per_sec": 0, 00:27:13.961 "w_mbytes_per_sec": 0 00:27:13.961 }, 00:27:13.961 "claimed": false, 00:27:13.961 "zoned": false, 00:27:13.961 "supported_io_types": { 00:27:13.961 "read": true, 00:27:13.961 "write": true, 00:27:13.962 "unmap": false, 00:27:13.962 "write_zeroes": true, 00:27:13.962 "flush": true, 00:27:13.962 "reset": true, 00:27:13.962 "compare": true, 00:27:13.962 "compare_and_write": true, 00:27:13.962 "abort": true, 00:27:13.962 "nvme_admin": true, 00:27:13.962 "nvme_io": true 00:27:13.962 }, 00:27:13.962 "memory_domains": [ 00:27:13.962 { 00:27:13.962 "dma_device_id": "system", 00:27:13.962 "dma_device_type": 1 00:27:13.962 } 00:27:13.962 ], 00:27:13.962 "driver_specific": { 00:27:13.962 "nvme": [ 00:27:13.962 { 00:27:13.962 "trid": { 00:27:13.962 "trtype": "TCP", 00:27:13.962 "adrfam": "IPv4", 00:27:13.962 "traddr": "10.0.0.2", 00:27:13.962 "trsvcid": "4420", 00:27:13.962 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:13.962 }, 00:27:13.962 "ctrlr_data": { 00:27:13.962 "cntlid": 2, 00:27:13.962 "vendor_id": "0x8086", 00:27:13.962 "model_number": "SPDK bdev Controller", 00:27:13.962 "serial_number": "00000000000000000000", 00:27:13.962 "firmware_revision": "24.05", 00:27:13.962 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:13.962 "oacs": { 00:27:13.962 "security": 0, 00:27:13.962 "format": 0, 00:27:13.962 "firmware": 0, 00:27:13.962 "ns_manage": 0 00:27:13.962 }, 00:27:13.962 "multi_ctrlr": true, 00:27:13.962 "ana_reporting": false 00:27:13.962 }, 00:27:13.962 "vs": { 00:27:13.962 "nvme_version": "1.3" 00:27:13.962 }, 00:27:13.962 "ns_data": { 00:27:13.962 "id": 1, 00:27:13.962 "can_share": true 00:27:13.962 } 00:27:13.962 } 00:27:13.962 ], 00:27:13.962 "mp_policy": "active_passive" 
00:27:13.962 } 00:27:13.962 } 00:27:13.962 ] 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.mx54dQqIh6 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.mx54dQqIh6 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.962 [2024-05-13 20:40:29.886050] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:13.962 [2024-05-13 20:40:29.886163] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mx54dQqIh6 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:13.962 [2024-05-13 20:40:29.898075] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mx54dQqIh6 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.962 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.222 [2024-05-13 20:40:29.910111] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:14.222 [2024-05-13 20:40:29.910151] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:27:14.222 nvme0n1 00:27:14.222 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.222 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:14.222 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.222 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.222 [ 00:27:14.222 { 00:27:14.222 "name": "nvme0n1", 00:27:14.222 "aliases": [ 00:27:14.222 "5aaa31b0-5eee-4ef7-b5bd-8783596c63e7" 00:27:14.222 ], 00:27:14.222 "product_name": "NVMe disk", 00:27:14.222 "block_size": 512, 00:27:14.222 "num_blocks": 2097152, 00:27:14.222 "uuid": "5aaa31b0-5eee-4ef7-b5bd-8783596c63e7", 00:27:14.222 "assigned_rate_limits": { 00:27:14.222 "rw_ios_per_sec": 0, 00:27:14.222 "rw_mbytes_per_sec": 0, 00:27:14.222 "r_mbytes_per_sec": 0, 00:27:14.222 "w_mbytes_per_sec": 0 00:27:14.222 }, 00:27:14.222 "claimed": false, 00:27:14.222 "zoned": false, 00:27:14.222 "supported_io_types": { 00:27:14.222 "read": true, 00:27:14.222 "write": true, 00:27:14.222 "unmap": false, 00:27:14.222 "write_zeroes": true, 00:27:14.222 "flush": true, 00:27:14.222 "reset": true, 00:27:14.222 "compare": true, 00:27:14.222 "compare_and_write": true, 00:27:14.222 "abort": true, 00:27:14.222 "nvme_admin": true, 00:27:14.222 "nvme_io": true 00:27:14.222 }, 00:27:14.222 "memory_domains": [ 00:27:14.222 { 00:27:14.222 "dma_device_id": "system", 00:27:14.222 "dma_device_type": 1 00:27:14.222 } 00:27:14.222 ], 00:27:14.222 "driver_specific": { 00:27:14.222 "nvme": [ 00:27:14.222 { 00:27:14.222 "trid": { 00:27:14.222 "trtype": "TCP", 00:27:14.222 "adrfam": "IPv4", 00:27:14.222 "traddr": "10.0.0.2", 00:27:14.222 "trsvcid": "4421", 00:27:14.222 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:14.222 }, 00:27:14.222 "ctrlr_data": { 00:27:14.222 "cntlid": 3, 00:27:14.222 "vendor_id": "0x8086", 00:27:14.222 "model_number": "SPDK bdev Controller", 00:27:14.222 "serial_number": "00000000000000000000", 00:27:14.222 "firmware_revision": "24.05", 00:27:14.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:14.222 "oacs": { 00:27:14.222 "security": 0, 00:27:14.222 "format": 0, 00:27:14.222 "firmware": 0, 00:27:14.222 "ns_manage": 0 00:27:14.222 }, 00:27:14.222 "multi_ctrlr": true, 00:27:14.222 "ana_reporting": false 00:27:14.222 }, 00:27:14.222 "vs": { 00:27:14.222 "nvme_version": "1.3" 00:27:14.222 }, 00:27:14.222 "ns_data": { 00:27:14.222 "id": 1, 00:27:14.222 "can_share": true 00:27:14.222 } 00:27:14.222 } 00:27:14.222 ], 00:27:14.222 "mp_policy": "active_passive" 00:27:14.222 } 00:27:14.222 } 00:27:14.222 ] 00:27:14.222 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.222 20:40:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.222 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.222 20:40:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:14.222 20:40:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.222 20:40:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.mx54dQqIh6 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:14.223 rmmod nvme_tcp 00:27:14.223 rmmod nvme_fabrics 00:27:14.223 rmmod nvme_keyring 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3182209 ']' 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3182209 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3182209 ']' 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3182209 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3182209 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3182209' 00:27:14.223 killing process with pid 3182209 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3182209 00:27:14.223 [2024-05-13 20:40:30.134231] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:14.223 20:40:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3182209 00:27:14.223 [2024-05-13 20:40:30.134264] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:14.223 [2024-05-13 20:40:30.134273] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:14.484 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:14.484 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:14.484 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:14.484 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:14.484 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:14.484 20:40:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.484 20:40:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:14.484 20:40:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.398 20:40:32 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:16.398 00:27:16.398 real 0m11.953s 00:27:16.398 user 0m4.148s 00:27:16.398 sys 0m6.226s 00:27:16.398 20:40:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:16.398 20:40:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:16.398 ************************************ 00:27:16.398 END TEST nvmf_async_init 00:27:16.398 ************************************ 00:27:16.660 20:40:32 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:16.660 20:40:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:16.660 20:40:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:16.660 20:40:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:16.660 ************************************ 00:27:16.660 START TEST dma 00:27:16.660 ************************************ 00:27:16.660 20:40:32 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:16.660 * Looking for test storage... 00:27:16.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:16.660 20:40:32 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.660 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.661 20:40:32 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.661 20:40:32 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.661 20:40:32 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.661 20:40:32 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.661 20:40:32 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.661 20:40:32 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.661 20:40:32 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:27:16.661 20:40:32 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:16.661 20:40:32 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:16.661 20:40:32 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:16.661 20:40:32 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:27:16.661 00:27:16.661 real 0m0.130s 00:27:16.661 user 0m0.061s 00:27:16.661 sys 0m0.078s 00:27:16.661 20:40:32 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:16.661 20:40:32 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:27:16.661 ************************************ 
00:27:16.661 END TEST dma 00:27:16.661 ************************************ 00:27:16.661 20:40:32 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:16.661 20:40:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:16.661 20:40:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:16.661 20:40:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:16.922 ************************************ 00:27:16.922 START TEST nvmf_identify 00:27:16.922 ************************************ 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:16.922 * Looking for test storage... 00:27:16.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.922 20:40:32 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:16.923 20:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:25.064 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:25.064 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.064 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:25.065 Found net devices under 0000:31:00.0: cvl_0_0 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
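Editor's note: the device-to-netdev mapping traced above (and repeated for the second port in the lines that follow) is a plain sysfs walk — for each whitelisted PCI function the script expands /sys/bus/pci/devices/$pci/net/* and strips the path prefix to get the interface name. A minimal stand-alone sketch of the same lookup, assuming the 0000:31:00.0 address reported in this log:

    pci=0000:31:00.0                                  # E810 port found above (0x8086:0x159b)
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
    done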
00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:25.065 Found net devices under 0000:31:00.1: cvl_0_1 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:25.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:25.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:27:25.065 00:27:25.065 --- 10.0.0.2 ping statistics --- 00:27:25.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.065 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:25.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:27:25.065 00:27:25.065 --- 10.0.0.1 ping statistics --- 00:27:25.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.065 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3187153 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3187153 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3187153 ']' 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:25.065 20:40:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:25.065 [2024-05-13 20:40:40.760282] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
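Editor's note: at this point nvmftestinit has built the test topology — one E810 port (cvl_0_0) is moved into a private network namespace, cvl_0_0_ns_spdk, and addressed as 10.0.0.2 where nvmf_tgt will listen, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic traverses the physical ports rather than the kernel loopback. Condensed from the ip/iptables calls traced above (addresses and interface names taken from this log; the address flush and error handling are omitted), the setup amounts to:

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # confirm initiator-to-target reachability

The two ping runs recorded here check both directions before the target is launched inside the namespace via ip netns exec.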
00:27:25.065 [2024-05-13 20:40:40.760354] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.065 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.065 [2024-05-13 20:40:40.841905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:25.065 [2024-05-13 20:40:40.919521] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.065 [2024-05-13 20:40:40.919565] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.065 [2024-05-13 20:40:40.919573] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.065 [2024-05-13 20:40:40.919579] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.065 [2024-05-13 20:40:40.919585] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.065 [2024-05-13 20:40:40.919754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.065 [2024-05-13 20:40:40.919889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.065 [2024-05-13 20:40:40.920024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.065 [2024-05-13 20:40:40.920026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.635 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:25.635 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:27:25.635 20:40:41 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:25.635 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.635 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:25.635 [2024-05-13 20:40:41.549733] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.635 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.635 20:40:41 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:25.635 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.635 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:25.897 Malloc0 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:25.897 [2024-05-13 20:40:41.649103] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:25.897 [2024-05-13 20:40:41.649367] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.897 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.898 20:40:41 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:25.898 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.898 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:25.898 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.898 20:40:41 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:25.898 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.898 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:25.898 [ 00:27:25.898 { 00:27:25.898 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:25.898 "subtype": "Discovery", 00:27:25.898 "listen_addresses": [ 00:27:25.898 { 00:27:25.898 "trtype": "TCP", 00:27:25.898 "adrfam": "IPv4", 00:27:25.898 "traddr": "10.0.0.2", 00:27:25.898 "trsvcid": "4420" 00:27:25.898 } 00:27:25.898 ], 00:27:25.898 "allow_any_host": true, 00:27:25.898 "hosts": [] 00:27:25.898 }, 00:27:25.898 { 00:27:25.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.898 "subtype": "NVMe", 00:27:25.898 "listen_addresses": [ 00:27:25.898 { 00:27:25.898 "trtype": "TCP", 00:27:25.898 "adrfam": "IPv4", 00:27:25.898 "traddr": "10.0.0.2", 00:27:25.898 "trsvcid": "4420" 00:27:25.898 } 00:27:25.898 ], 00:27:25.898 "allow_any_host": true, 00:27:25.898 "hosts": [], 00:27:25.898 "serial_number": "SPDK00000000000001", 00:27:25.898 "model_number": "SPDK bdev Controller", 00:27:25.898 "max_namespaces": 32, 00:27:25.898 "min_cntlid": 1, 00:27:25.898 "max_cntlid": 65519, 00:27:25.898 "namespaces": [ 00:27:25.898 { 00:27:25.898 "nsid": 1, 00:27:25.898 "bdev_name": "Malloc0", 00:27:25.898 "name": "Malloc0", 00:27:25.898 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:25.898 "eui64": "ABCDEF0123456789", 00:27:25.898 "uuid": "68808220-8821-43f3-a07b-69a4d23e8aa8" 00:27:25.898 } 00:27:25.898 ] 00:27:25.898 } 00:27:25.898 ] 00:27:25.898 20:40:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.898 20:40:41 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:25.898 [2024-05-13 
20:40:41.710504] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:27:25.898 [2024-05-13 20:40:41.710545] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3187313 ] 00:27:25.898 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.898 [2024-05-13 20:40:41.741968] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:25.898 [2024-05-13 20:40:41.742009] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:25.898 [2024-05-13 20:40:41.742014] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:25.898 [2024-05-13 20:40:41.742026] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:25.898 [2024-05-13 20:40:41.742033] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:25.898 [2024-05-13 20:40:41.745348] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:25.898 [2024-05-13 20:40:41.745379] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe2cc30 0 00:27:25.898 [2024-05-13 20:40:41.753324] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:25.898 [2024-05-13 20:40:41.753334] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:25.898 [2024-05-13 20:40:41.753338] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:25.898 [2024-05-13 20:40:41.753342] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:25.898 [2024-05-13 20:40:41.753376] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.898 [2024-05-13 20:40:41.753382] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.898 [2024-05-13 20:40:41.753385] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe2cc30) 00:27:25.898 [2024-05-13 20:40:41.753397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:25.898 [2024-05-13 20:40:41.753412] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94980, cid 0, qid 0 00:27:25.898 [2024-05-13 20:40:41.761323] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:25.898 [2024-05-13 20:40:41.761332] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:25.898 [2024-05-13 20:40:41.761336] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:25.898 [2024-05-13 20:40:41.761340] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94980) on tqpair=0xe2cc30 00:27:25.898 [2024-05-13 20:40:41.761349] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:25.898 [2024-05-13 20:40:41.761356] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:25.898 [2024-05-13 20:40:41.761361] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:25.898 [2024-05-13 20:40:41.761377] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:27:25.898 [2024-05-13 20:40:41.761381] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.898 [2024-05-13 20:40:41.761385] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe2cc30) 00:27:25.898 [2024-05-13 20:40:41.761392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.898 [2024-05-13 20:40:41.761405] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94980, cid 0, qid 0 00:27:25.898 [2024-05-13 20:40:41.761642] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:25.898 [2024-05-13 20:40:41.761649] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:25.898 [2024-05-13 20:40:41.761653] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:25.898 [2024-05-13 20:40:41.761656] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94980) on tqpair=0xe2cc30 00:27:25.898 [2024-05-13 20:40:41.761664] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:25.898 [2024-05-13 20:40:41.761671] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:25.898 [2024-05-13 20:40:41.761682] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.898 [2024-05-13 20:40:41.761686] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.898 [2024-05-13 20:40:41.761689] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe2cc30) 00:27:25.898 [2024-05-13 20:40:41.761696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.898 [2024-05-13 20:40:41.761707] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94980, cid 0, qid 0 00:27:25.898 [2024-05-13 20:40:41.761916] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:25.898 [2024-05-13 20:40:41.761922] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:25.898 [2024-05-13 20:40:41.761925] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:25.898 [2024-05-13 20:40:41.761929] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94980) on tqpair=0xe2cc30 00:27:25.898 [2024-05-13 20:40:41.761934] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:25.898 [2024-05-13 20:40:41.761942] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:25.898 [2024-05-13 20:40:41.761948] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.898 [2024-05-13 20:40:41.761952] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.898 [2024-05-13 20:40:41.761955] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe2cc30) 00:27:25.898 [2024-05-13 20:40:41.761962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.898 [2024-05-13 20:40:41.761971] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94980, cid 0, qid 0 00:27:25.898 [2024-05-13 20:40:41.762201] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:25.898 [2024-05-13 20:40:41.762207] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:25.898 [2024-05-13 20:40:41.762210] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:25.898 [2024-05-13 20:40:41.762214] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94980) on tqpair=0xe2cc30 00:27:25.898 [2024-05-13 20:40:41.762219] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:25.898 [2024-05-13 20:40:41.762228] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.898 [2024-05-13 20:40:41.762231] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.898 [2024-05-13 20:40:41.762235] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe2cc30) 00:27:25.898 [2024-05-13 20:40:41.762241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.898 [2024-05-13 20:40:41.762251] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94980, cid 0, qid 0 00:27:25.898 [2024-05-13 20:40:41.762463] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:25.898 [2024-05-13 20:40:41.762469] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:25.898 [2024-05-13 20:40:41.762473] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:25.898 [2024-05-13 20:40:41.762476] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94980) on tqpair=0xe2cc30 00:27:25.898 [2024-05-13 20:40:41.762481] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:25.899 [2024-05-13 20:40:41.762486] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:25.899 [2024-05-13 20:40:41.762493] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:25.899 [2024-05-13 20:40:41.762601] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:25.899 [2024-05-13 20:40:41.762605] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:25.899 [2024-05-13 20:40:41.762613] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.762617] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.762620] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe2cc30) 00:27:25.899 [2024-05-13 20:40:41.762627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.899 [2024-05-13 20:40:41.762637] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94980, cid 0, qid 0 00:27:25.899 [2024-05-13 20:40:41.762813] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:25.899 [2024-05-13 20:40:41.762819] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:25.899 
[2024-05-13 20:40:41.762822] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.762826] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94980) on tqpair=0xe2cc30 00:27:25.899 [2024-05-13 20:40:41.762831] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:25.899 [2024-05-13 20:40:41.762840] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.762843] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.762847] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe2cc30) 00:27:25.899 [2024-05-13 20:40:41.762853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.899 [2024-05-13 20:40:41.762863] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94980, cid 0, qid 0 00:27:25.899 [2024-05-13 20:40:41.763070] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:25.899 [2024-05-13 20:40:41.763076] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:25.899 [2024-05-13 20:40:41.763079] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.763083] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94980) on tqpair=0xe2cc30 00:27:25.899 [2024-05-13 20:40:41.763087] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:25.899 [2024-05-13 20:40:41.763091] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:25.899 [2024-05-13 20:40:41.763099] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:25.899 [2024-05-13 20:40:41.763106] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:25.899 [2024-05-13 20:40:41.763115] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.763119] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe2cc30) 00:27:25.899 [2024-05-13 20:40:41.763126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.899 [2024-05-13 20:40:41.763136] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94980, cid 0, qid 0 00:27:25.899 [2024-05-13 20:40:41.763340] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:25.899 [2024-05-13 20:40:41.763348] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:25.899 [2024-05-13 20:40:41.763351] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.763355] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe2cc30): datao=0, datal=4096, cccid=0 00:27:25.899 [2024-05-13 20:40:41.763362] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe94980) on tqpair(0xe2cc30): expected_datao=0, payload_size=4096 00:27:25.899 
[2024-05-13 20:40:41.763366] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.763395] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.763400] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.808323] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:25.899 [2024-05-13 20:40:41.808334] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:25.899 [2024-05-13 20:40:41.808337] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.808341] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94980) on tqpair=0xe2cc30 00:27:25.899 [2024-05-13 20:40:41.808349] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:25.899 [2024-05-13 20:40:41.808358] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:25.899 [2024-05-13 20:40:41.808362] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:25.899 [2024-05-13 20:40:41.808368] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:25.899 [2024-05-13 20:40:41.808372] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:25.899 [2024-05-13 20:40:41.808377] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:25.899 [2024-05-13 20:40:41.808385] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:25.899 [2024-05-13 20:40:41.808392] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.808396] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.808400] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe2cc30) 00:27:25.899 [2024-05-13 20:40:41.808407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:25.899 [2024-05-13 20:40:41.808419] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94980, cid 0, qid 0 00:27:25.899 [2024-05-13 20:40:41.808604] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:25.899 [2024-05-13 20:40:41.808611] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:25.899 [2024-05-13 20:40:41.808614] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.808618] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94980) on tqpair=0xe2cc30 00:27:25.899 [2024-05-13 20:40:41.808625] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.808629] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.808632] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe2cc30) 00:27:25.899 [2024-05-13 20:40:41.808638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:27:25.899 [2024-05-13 20:40:41.808644] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.808648] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.808651] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe2cc30) 00:27:25.899 [2024-05-13 20:40:41.808657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:25.899 [2024-05-13 20:40:41.808663] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.808667] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.808673] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe2cc30) 00:27:25.899 [2024-05-13 20:40:41.808679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:25.899 [2024-05-13 20:40:41.808685] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.808688] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.808692] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe2cc30) 00:27:25.899 [2024-05-13 20:40:41.808697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:25.899 [2024-05-13 20:40:41.808702] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:25.899 [2024-05-13 20:40:41.808712] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:25.899 [2024-05-13 20:40:41.808719] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.808722] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe2cc30) 00:27:25.899 [2024-05-13 20:40:41.808729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.899 [2024-05-13 20:40:41.808741] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94980, cid 0, qid 0 00:27:25.899 [2024-05-13 20:40:41.808746] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94ae0, cid 1, qid 0 00:27:25.899 [2024-05-13 20:40:41.808751] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94c40, cid 2, qid 0 00:27:25.899 [2024-05-13 20:40:41.808755] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94da0, cid 3, qid 0 00:27:25.899 [2024-05-13 20:40:41.808760] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94f00, cid 4, qid 0 00:27:25.899 [2024-05-13 20:40:41.809009] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:25.899 [2024-05-13 20:40:41.809015] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:25.899 [2024-05-13 20:40:41.809019] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:25.899 [2024-05-13 20:40:41.809023] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94f00) on 
tqpair=0xe2cc30 00:27:25.899 [2024-05-13 20:40:41.809027] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:25.899 [2024-05-13 20:40:41.809032] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:25.900 [2024-05-13 20:40:41.809043] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.900 [2024-05-13 20:40:41.809047] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe2cc30) 00:27:25.900 [2024-05-13 20:40:41.809053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.900 [2024-05-13 20:40:41.809063] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94f00, cid 4, qid 0 00:27:25.900 [2024-05-13 20:40:41.809278] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:25.900 [2024-05-13 20:40:41.809285] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:25.900 [2024-05-13 20:40:41.809288] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:25.900 [2024-05-13 20:40:41.809292] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe2cc30): datao=0, datal=4096, cccid=4 00:27:25.900 [2024-05-13 20:40:41.809296] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe94f00) on tqpair(0xe2cc30): expected_datao=0, payload_size=4096 00:27:25.900 [2024-05-13 20:40:41.809300] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.900 [2024-05-13 20:40:41.809339] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:25.900 [2024-05-13 20:40:41.809343] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:25.900 [2024-05-13 20:40:41.809522] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:25.900 [2024-05-13 20:40:41.809528] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:25.900 [2024-05-13 20:40:41.809531] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:25.900 [2024-05-13 20:40:41.809535] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94f00) on tqpair=0xe2cc30 00:27:25.900 [2024-05-13 20:40:41.809546] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:25.900 [2024-05-13 20:40:41.809567] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.900 [2024-05-13 20:40:41.809571] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe2cc30) 00:27:25.900 [2024-05-13 20:40:41.809577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.900 [2024-05-13 20:40:41.809584] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.900 [2024-05-13 20:40:41.809587] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:25.900 [2024-05-13 20:40:41.809591] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe2cc30) 00:27:25.900 [2024-05-13 20:40:41.809597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:25.900 [2024-05-13 20:40:41.809610] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94f00, cid 4, qid 0 00:27:25.900 [2024-05-13 20:40:41.809615] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95060, cid 5, qid 0 00:27:25.900 [2024-05-13 20:40:41.809871] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:25.900 [2024-05-13 20:40:41.809878] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:25.900 [2024-05-13 20:40:41.809881] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:25.900 [2024-05-13 20:40:41.809885] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe2cc30): datao=0, datal=1024, cccid=4 00:27:25.900 [2024-05-13 20:40:41.809889] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe94f00) on tqpair(0xe2cc30): expected_datao=0, payload_size=1024 00:27:25.900 [2024-05-13 20:40:41.809893] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:25.900 [2024-05-13 20:40:41.809900] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:25.900 [2024-05-13 20:40:41.809903] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:25.900 [2024-05-13 20:40:41.809909] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:25.900 [2024-05-13 20:40:41.809915] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:25.900 [2024-05-13 20:40:41.809918] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:25.900 [2024-05-13 20:40:41.809922] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95060) on tqpair=0xe2cc30 00:27:26.167 [2024-05-13 20:40:41.851325] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.167 [2024-05-13 20:40:41.851338] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.167 [2024-05-13 20:40:41.851342] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.167 [2024-05-13 20:40:41.851347] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94f00) on tqpair=0xe2cc30 00:27:26.167 [2024-05-13 20:40:41.851359] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.167 [2024-05-13 20:40:41.851363] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe2cc30) 00:27:26.167 [2024-05-13 20:40:41.851370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.167 [2024-05-13 20:40:41.851386] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94f00, cid 4, qid 0 00:27:26.167 [2024-05-13 20:40:41.851580] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:26.167 [2024-05-13 20:40:41.851590] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:26.167 [2024-05-13 20:40:41.851593] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:26.167 [2024-05-13 20:40:41.851597] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe2cc30): datao=0, datal=3072, cccid=4 00:27:26.167 [2024-05-13 20:40:41.851601] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe94f00) on tqpair(0xe2cc30): expected_datao=0, payload_size=3072 00:27:26.167 [2024-05-13 20:40:41.851605] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.167 [2024-05-13 20:40:41.851612] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:27:26.167 [2024-05-13 20:40:41.851616] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:26.167 [2024-05-13 20:40:41.851793] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.167 [2024-05-13 20:40:41.851800] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.167 [2024-05-13 20:40:41.851803] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.167 [2024-05-13 20:40:41.851807] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94f00) on tqpair=0xe2cc30 00:27:26.167 [2024-05-13 20:40:41.851815] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.167 [2024-05-13 20:40:41.851819] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe2cc30) 00:27:26.167 [2024-05-13 20:40:41.851826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.167 [2024-05-13 20:40:41.851839] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94f00, cid 4, qid 0 00:27:26.167 [2024-05-13 20:40:41.852054] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:26.167 [2024-05-13 20:40:41.852061] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:26.167 [2024-05-13 20:40:41.852064] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:26.167 [2024-05-13 20:40:41.852067] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe2cc30): datao=0, datal=8, cccid=4 00:27:26.167 [2024-05-13 20:40:41.852072] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe94f00) on tqpair(0xe2cc30): expected_datao=0, payload_size=8 00:27:26.167 [2024-05-13 20:40:41.852076] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.167 [2024-05-13 20:40:41.852082] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:26.167 [2024-05-13 20:40:41.852086] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:26.167 [2024-05-13 20:40:41.892497] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.167 [2024-05-13 20:40:41.892507] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.167 [2024-05-13 20:40:41.892510] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.167 [2024-05-13 20:40:41.892514] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94f00) on tqpair=0xe2cc30 00:27:26.167 ===================================================== 00:27:26.167 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:26.167 ===================================================== 00:27:26.167 Controller Capabilities/Features 00:27:26.167 ================================ 00:27:26.167 Vendor ID: 0000 00:27:26.167 Subsystem Vendor ID: 0000 00:27:26.167 Serial Number: .................... 00:27:26.167 Model Number: ........................................ 
00:27:26.167 Firmware Version: 24.05 00:27:26.167 Recommended Arb Burst: 0 00:27:26.167 IEEE OUI Identifier: 00 00 00 00:27:26.167 Multi-path I/O 00:27:26.167 May have multiple subsystem ports: No 00:27:26.167 May have multiple controllers: No 00:27:26.167 Associated with SR-IOV VF: No 00:27:26.167 Max Data Transfer Size: 131072 00:27:26.167 Max Number of Namespaces: 0 00:27:26.167 Max Number of I/O Queues: 1024 00:27:26.167 NVMe Specification Version (VS): 1.3 00:27:26.167 NVMe Specification Version (Identify): 1.3 00:27:26.167 Maximum Queue Entries: 128 00:27:26.167 Contiguous Queues Required: Yes 00:27:26.167 Arbitration Mechanisms Supported 00:27:26.167 Weighted Round Robin: Not Supported 00:27:26.167 Vendor Specific: Not Supported 00:27:26.167 Reset Timeout: 15000 ms 00:27:26.167 Doorbell Stride: 4 bytes 00:27:26.167 NVM Subsystem Reset: Not Supported 00:27:26.167 Command Sets Supported 00:27:26.167 NVM Command Set: Supported 00:27:26.167 Boot Partition: Not Supported 00:27:26.167 Memory Page Size Minimum: 4096 bytes 00:27:26.167 Memory Page Size Maximum: 4096 bytes 00:27:26.167 Persistent Memory Region: Not Supported 00:27:26.167 Optional Asynchronous Events Supported 00:27:26.167 Namespace Attribute Notices: Not Supported 00:27:26.167 Firmware Activation Notices: Not Supported 00:27:26.167 ANA Change Notices: Not Supported 00:27:26.167 PLE Aggregate Log Change Notices: Not Supported 00:27:26.167 LBA Status Info Alert Notices: Not Supported 00:27:26.167 EGE Aggregate Log Change Notices: Not Supported 00:27:26.167 Normal NVM Subsystem Shutdown event: Not Supported 00:27:26.167 Zone Descriptor Change Notices: Not Supported 00:27:26.167 Discovery Log Change Notices: Supported 00:27:26.167 Controller Attributes 00:27:26.167 128-bit Host Identifier: Not Supported 00:27:26.167 Non-Operational Permissive Mode: Not Supported 00:27:26.167 NVM Sets: Not Supported 00:27:26.167 Read Recovery Levels: Not Supported 00:27:26.167 Endurance Groups: Not Supported 00:27:26.167 Predictable Latency Mode: Not Supported 00:27:26.167 Traffic Based Keep ALive: Not Supported 00:27:26.167 Namespace Granularity: Not Supported 00:27:26.167 SQ Associations: Not Supported 00:27:26.167 UUID List: Not Supported 00:27:26.167 Multi-Domain Subsystem: Not Supported 00:27:26.167 Fixed Capacity Management: Not Supported 00:27:26.167 Variable Capacity Management: Not Supported 00:27:26.167 Delete Endurance Group: Not Supported 00:27:26.167 Delete NVM Set: Not Supported 00:27:26.167 Extended LBA Formats Supported: Not Supported 00:27:26.167 Flexible Data Placement Supported: Not Supported 00:27:26.167 00:27:26.167 Controller Memory Buffer Support 00:27:26.167 ================================ 00:27:26.167 Supported: No 00:27:26.167 00:27:26.167 Persistent Memory Region Support 00:27:26.167 ================================ 00:27:26.167 Supported: No 00:27:26.167 00:27:26.167 Admin Command Set Attributes 00:27:26.167 ============================ 00:27:26.167 Security Send/Receive: Not Supported 00:27:26.167 Format NVM: Not Supported 00:27:26.167 Firmware Activate/Download: Not Supported 00:27:26.167 Namespace Management: Not Supported 00:27:26.167 Device Self-Test: Not Supported 00:27:26.167 Directives: Not Supported 00:27:26.167 NVMe-MI: Not Supported 00:27:26.167 Virtualization Management: Not Supported 00:27:26.167 Doorbell Buffer Config: Not Supported 00:27:26.167 Get LBA Status Capability: Not Supported 00:27:26.167 Command & Feature Lockdown Capability: Not Supported 00:27:26.167 Abort Command Limit: 1 00:27:26.167 Async 
Event Request Limit: 4 00:27:26.167 Number of Firmware Slots: N/A 00:27:26.167 Firmware Slot 1 Read-Only: N/A 00:27:26.167 Firmware Activation Without Reset: N/A 00:27:26.167 Multiple Update Detection Support: N/A 00:27:26.167 Firmware Update Granularity: No Information Provided 00:27:26.167 Per-Namespace SMART Log: No 00:27:26.167 Asymmetric Namespace Access Log Page: Not Supported 00:27:26.167 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:26.168 Command Effects Log Page: Not Supported 00:27:26.168 Get Log Page Extended Data: Supported 00:27:26.168 Telemetry Log Pages: Not Supported 00:27:26.168 Persistent Event Log Pages: Not Supported 00:27:26.168 Supported Log Pages Log Page: May Support 00:27:26.168 Commands Supported & Effects Log Page: Not Supported 00:27:26.168 Feature Identifiers & Effects Log Page:May Support 00:27:26.168 NVMe-MI Commands & Effects Log Page: May Support 00:27:26.168 Data Area 4 for Telemetry Log: Not Supported 00:27:26.168 Error Log Page Entries Supported: 128 00:27:26.168 Keep Alive: Not Supported 00:27:26.168 00:27:26.168 NVM Command Set Attributes 00:27:26.168 ========================== 00:27:26.168 Submission Queue Entry Size 00:27:26.168 Max: 1 00:27:26.168 Min: 1 00:27:26.168 Completion Queue Entry Size 00:27:26.168 Max: 1 00:27:26.168 Min: 1 00:27:26.168 Number of Namespaces: 0 00:27:26.168 Compare Command: Not Supported 00:27:26.168 Write Uncorrectable Command: Not Supported 00:27:26.168 Dataset Management Command: Not Supported 00:27:26.168 Write Zeroes Command: Not Supported 00:27:26.168 Set Features Save Field: Not Supported 00:27:26.168 Reservations: Not Supported 00:27:26.168 Timestamp: Not Supported 00:27:26.168 Copy: Not Supported 00:27:26.168 Volatile Write Cache: Not Present 00:27:26.168 Atomic Write Unit (Normal): 1 00:27:26.168 Atomic Write Unit (PFail): 1 00:27:26.168 Atomic Compare & Write Unit: 1 00:27:26.168 Fused Compare & Write: Supported 00:27:26.168 Scatter-Gather List 00:27:26.168 SGL Command Set: Supported 00:27:26.168 SGL Keyed: Supported 00:27:26.168 SGL Bit Bucket Descriptor: Not Supported 00:27:26.168 SGL Metadata Pointer: Not Supported 00:27:26.168 Oversized SGL: Not Supported 00:27:26.168 SGL Metadata Address: Not Supported 00:27:26.168 SGL Offset: Supported 00:27:26.168 Transport SGL Data Block: Not Supported 00:27:26.168 Replay Protected Memory Block: Not Supported 00:27:26.168 00:27:26.168 Firmware Slot Information 00:27:26.168 ========================= 00:27:26.168 Active slot: 0 00:27:26.168 00:27:26.168 00:27:26.168 Error Log 00:27:26.168 ========= 00:27:26.168 00:27:26.168 Active Namespaces 00:27:26.168 ================= 00:27:26.168 Discovery Log Page 00:27:26.168 ================== 00:27:26.168 Generation Counter: 2 00:27:26.168 Number of Records: 2 00:27:26.168 Record Format: 0 00:27:26.168 00:27:26.168 Discovery Log Entry 0 00:27:26.168 ---------------------- 00:27:26.168 Transport Type: 3 (TCP) 00:27:26.168 Address Family: 1 (IPv4) 00:27:26.168 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:26.168 Entry Flags: 00:27:26.168 Duplicate Returned Information: 1 00:27:26.168 Explicit Persistent Connection Support for Discovery: 1 00:27:26.168 Transport Requirements: 00:27:26.168 Secure Channel: Not Required 00:27:26.168 Port ID: 0 (0x0000) 00:27:26.168 Controller ID: 65535 (0xffff) 00:27:26.168 Admin Max SQ Size: 128 00:27:26.168 Transport Service Identifier: 4420 00:27:26.168 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:26.168 Transport Address: 10.0.0.2 00:27:26.168 
Discovery Log Entry 1 00:27:26.168 ---------------------- 00:27:26.168 Transport Type: 3 (TCP) 00:27:26.168 Address Family: 1 (IPv4) 00:27:26.168 Subsystem Type: 2 (NVM Subsystem) 00:27:26.168 Entry Flags: 00:27:26.168 Duplicate Returned Information: 0 00:27:26.168 Explicit Persistent Connection Support for Discovery: 0 00:27:26.168 Transport Requirements: 00:27:26.168 Secure Channel: Not Required 00:27:26.168 Port ID: 0 (0x0000) 00:27:26.168 Controller ID: 65535 (0xffff) 00:27:26.168 Admin Max SQ Size: 128 00:27:26.168 Transport Service Identifier: 4420 00:27:26.168 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:26.168 Transport Address: 10.0.0.2 [2024-05-13 20:40:41.892602] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:26.168 [2024-05-13 20:40:41.892615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.168 [2024-05-13 20:40:41.892621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.168 [2024-05-13 20:40:41.892627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.168 [2024-05-13 20:40:41.892633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.168 [2024-05-13 20:40:41.892644] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.892648] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.892651] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe2cc30) 00:27:26.168 [2024-05-13 20:40:41.892660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.168 [2024-05-13 20:40:41.892673] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94da0, cid 3, qid 0 00:27:26.168 [2024-05-13 20:40:41.892794] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.168 [2024-05-13 20:40:41.892800] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.168 [2024-05-13 20:40:41.892804] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.892807] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94da0) on tqpair=0xe2cc30 00:27:26.168 [2024-05-13 20:40:41.892814] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.892818] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.892821] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe2cc30) 00:27:26.168 [2024-05-13 20:40:41.892827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.168 [2024-05-13 20:40:41.892840] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94da0, cid 3, qid 0 00:27:26.168 [2024-05-13 20:40:41.893018] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.168 [2024-05-13 20:40:41.893024] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.168 [2024-05-13 20:40:41.893027] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.893031] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94da0) on tqpair=0xe2cc30 00:27:26.168 [2024-05-13 20:40:41.893035] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:26.168 [2024-05-13 20:40:41.893040] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:26.168 [2024-05-13 20:40:41.893049] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.893053] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.893056] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe2cc30) 00:27:26.168 [2024-05-13 20:40:41.893063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.168 [2024-05-13 20:40:41.893073] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94da0, cid 3, qid 0 00:27:26.168 [2024-05-13 20:40:41.893256] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.168 [2024-05-13 20:40:41.893262] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.168 [2024-05-13 20:40:41.893265] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.893269] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94da0) on tqpair=0xe2cc30 00:27:26.168 [2024-05-13 20:40:41.893279] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.893282] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.893286] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe2cc30) 00:27:26.168 [2024-05-13 20:40:41.893292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.168 [2024-05-13 20:40:41.893302] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94da0, cid 3, qid 0 00:27:26.168 [2024-05-13 20:40:41.893501] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.168 [2024-05-13 20:40:41.893508] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.168 [2024-05-13 20:40:41.893511] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.893515] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94da0) on tqpair=0xe2cc30 00:27:26.168 [2024-05-13 20:40:41.893524] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.893531] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.893534] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe2cc30) 00:27:26.168 [2024-05-13 20:40:41.893541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.168 [2024-05-13 20:40:41.893551] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94da0, cid 3, qid 0 00:27:26.168 [2024-05-13 20:40:41.893759] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.168 [2024-05-13 
20:40:41.893765] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.168 [2024-05-13 20:40:41.893768] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.168 [2024-05-13 20:40:41.893772] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94da0) on tqpair=0xe2cc30 00:27:26.168 [2024-05-13 20:40:41.893781] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.893785] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.893788] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe2cc30) 00:27:26.169 [2024-05-13 20:40:41.893795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.169 [2024-05-13 20:40:41.893804] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94da0, cid 3, qid 0 00:27:26.169 [2024-05-13 20:40:41.893997] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.169 [2024-05-13 20:40:41.894003] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.169 [2024-05-13 20:40:41.894007] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.894010] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94da0) on tqpair=0xe2cc30 00:27:26.169 [2024-05-13 20:40:41.894020] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.894024] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.894027] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe2cc30) 00:27:26.169 [2024-05-13 20:40:41.894034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.169 [2024-05-13 20:40:41.894043] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94da0, cid 3, qid 0 00:27:26.169 [2024-05-13 20:40:41.894229] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.169 [2024-05-13 20:40:41.894235] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.169 [2024-05-13 20:40:41.894239] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.894242] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94da0) on tqpair=0xe2cc30 00:27:26.169 [2024-05-13 20:40:41.894252] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.894256] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.894259] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe2cc30) 00:27:26.169 [2024-05-13 20:40:41.894266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.169 [2024-05-13 20:40:41.894275] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94da0, cid 3, qid 0 00:27:26.169 [2024-05-13 20:40:41.894469] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.169 [2024-05-13 20:40:41.894476] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.169 [2024-05-13 20:40:41.894479] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.169 
[2024-05-13 20:40:41.894483] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94da0) on tqpair=0xe2cc30 00:27:26.169 [2024-05-13 20:40:41.894492] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.894496] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.894502] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe2cc30) 00:27:26.169 [2024-05-13 20:40:41.894508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.169 [2024-05-13 20:40:41.894518] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94da0, cid 3, qid 0 00:27:26.169 [2024-05-13 20:40:41.894726] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.169 [2024-05-13 20:40:41.894732] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.169 [2024-05-13 20:40:41.894735] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.894739] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94da0) on tqpair=0xe2cc30 00:27:26.169 [2024-05-13 20:40:41.894748] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.894752] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.894755] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe2cc30) 00:27:26.169 [2024-05-13 20:40:41.894762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.169 [2024-05-13 20:40:41.894771] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94da0, cid 3, qid 0 00:27:26.169 [2024-05-13 20:40:41.894964] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.169 [2024-05-13 20:40:41.894970] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.169 [2024-05-13 20:40:41.894973] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.894977] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94da0) on tqpair=0xe2cc30 00:27:26.169 [2024-05-13 20:40:41.894986] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.894990] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.894993] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe2cc30) 00:27:26.169 [2024-05-13 20:40:41.895000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.169 [2024-05-13 20:40:41.895009] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94da0, cid 3, qid 0 00:27:26.169 [2024-05-13 20:40:41.895204] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.169 [2024-05-13 20:40:41.895210] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.169 [2024-05-13 20:40:41.895214] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.895217] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94da0) on tqpair=0xe2cc30 00:27:26.169 [2024-05-13 20:40:41.895227] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.895231] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.895234] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe2cc30) 00:27:26.169 [2024-05-13 20:40:41.895240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.169 [2024-05-13 20:40:41.895250] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe94da0, cid 3, qid 0 00:27:26.169 [2024-05-13 20:40:41.899321] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.169 [2024-05-13 20:40:41.899329] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.169 [2024-05-13 20:40:41.899332] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.899336] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe94da0) on tqpair=0xe2cc30 00:27:26.169 [2024-05-13 20:40:41.899344] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:27:26.169 00:27:26.169 20:40:41 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:26.169 [2024-05-13 20:40:41.934453] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:27:26.169 [2024-05-13 20:40:41.934493] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3187318 ] 00:27:26.169 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.169 [2024-05-13 20:40:41.972629] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:26.169 [2024-05-13 20:40:41.972669] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:26.169 [2024-05-13 20:40:41.972674] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:26.169 [2024-05-13 20:40:41.972686] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:26.169 [2024-05-13 20:40:41.972694] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:26.169 [2024-05-13 20:40:41.972980] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:26.169 [2024-05-13 20:40:41.973005] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2419c30 0 00:27:26.169 [2024-05-13 20:40:41.987324] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:26.169 [2024-05-13 20:40:41.987337] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:26.169 [2024-05-13 20:40:41.987341] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:26.169 [2024-05-13 20:40:41.987344] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:26.169 [2024-05-13 20:40:41.987376] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.987381] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.987385] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419c30) 00:27:26.169 [2024-05-13 20:40:41.987397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:26.169 [2024-05-13 20:40:41.987413] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481980, cid 0, qid 0 00:27:26.169 [2024-05-13 20:40:41.995323] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.169 [2024-05-13 20:40:41.995334] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.169 [2024-05-13 20:40:41.995337] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.169 [2024-05-13 20:40:41.995342] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481980) on tqpair=0x2419c30 00:27:26.169 [2024-05-13 20:40:41.995353] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:26.169 [2024-05-13 20:40:41.995360] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:26.169 [2024-05-13 20:40:41.995365] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:26.170 [2024-05-13 20:40:41.995380] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.995384] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.995388] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419c30) 00:27:26.170 [2024-05-13 20:40:41.995395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.170 [2024-05-13 20:40:41.995408] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481980, cid 0, qid 0 00:27:26.170 [2024-05-13 20:40:41.995489] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.170 [2024-05-13 20:40:41.995499] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.170 [2024-05-13 20:40:41.995503] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.995507] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481980) on tqpair=0x2419c30 00:27:26.170 [2024-05-13 20:40:41.995515] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:26.170 [2024-05-13 20:40:41.995522] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:26.170 [2024-05-13 20:40:41.995529] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.995532] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.995536] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419c30) 00:27:26.170 [2024-05-13 20:40:41.995543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.170 [2024-05-13 20:40:41.995554] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481980, cid 0, qid 0 00:27:26.170 [2024-05-13 20:40:41.995620] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.170 [2024-05-13 20:40:41.995627] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.170 [2024-05-13 20:40:41.995630] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.995634] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481980) on tqpair=0x2419c30 00:27:26.170 [2024-05-13 20:40:41.995639] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:26.170 [2024-05-13 20:40:41.995648] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:26.170 [2024-05-13 20:40:41.995654] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.995658] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.995661] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419c30) 00:27:26.170 [2024-05-13 20:40:41.995668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.170 [2024-05-13 20:40:41.995678] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481980, cid 0, qid 0 00:27:26.170 [2024-05-13 20:40:41.995740] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.170 [2024-05-13 20:40:41.995747] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.170 [2024-05-13 20:40:41.995750] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.995754] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481980) on tqpair=0x2419c30 00:27:26.170 [2024-05-13 20:40:41.995759] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:26.170 [2024-05-13 20:40:41.995769] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.995773] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.995776] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419c30) 00:27:26.170 [2024-05-13 20:40:41.995783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.170 [2024-05-13 20:40:41.995793] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481980, cid 0, qid 0 00:27:26.170 [2024-05-13 20:40:41.995852] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.170 [2024-05-13 20:40:41.995859] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.170 [2024-05-13 20:40:41.995862] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.995866] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481980) on tqpair=0x2419c30 00:27:26.170 [2024-05-13 20:40:41.995873] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:26.170 [2024-05-13 20:40:41.995878] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:26.170 
[2024-05-13 20:40:41.995886] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:26.170 [2024-05-13 20:40:41.995991] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:26.170 [2024-05-13 20:40:41.995995] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:26.170 [2024-05-13 20:40:41.996002] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.996006] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.996009] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419c30) 00:27:26.170 [2024-05-13 20:40:41.996016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.170 [2024-05-13 20:40:41.996026] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481980, cid 0, qid 0 00:27:26.170 [2024-05-13 20:40:41.996092] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.170 [2024-05-13 20:40:41.996099] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.170 [2024-05-13 20:40:41.996102] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.996106] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481980) on tqpair=0x2419c30 00:27:26.170 [2024-05-13 20:40:41.996112] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:26.170 [2024-05-13 20:40:41.996121] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.996124] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.996128] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419c30) 00:27:26.170 [2024-05-13 20:40:41.996134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.170 [2024-05-13 20:40:41.996144] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481980, cid 0, qid 0 00:27:26.170 [2024-05-13 20:40:41.996209] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.170 [2024-05-13 20:40:41.996216] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.170 [2024-05-13 20:40:41.996219] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.996223] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481980) on tqpair=0x2419c30 00:27:26.170 [2024-05-13 20:40:41.996228] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:26.170 [2024-05-13 20:40:41.996232] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:26.170 [2024-05-13 20:40:41.996240] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:26.170 [2024-05-13 20:40:41.996247] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:26.170 [2024-05-13 20:40:41.996256] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.996260] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419c30) 00:27:26.170 [2024-05-13 20:40:41.996266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.170 [2024-05-13 20:40:41.996278] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481980, cid 0, qid 0 00:27:26.170 [2024-05-13 20:40:41.996381] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:26.170 [2024-05-13 20:40:41.996392] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:26.170 [2024-05-13 20:40:41.996397] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.996401] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2419c30): datao=0, datal=4096, cccid=0 00:27:26.170 [2024-05-13 20:40:41.996406] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2481980) on tqpair(0x2419c30): expected_datao=0, payload_size=4096 00:27:26.170 [2024-05-13 20:40:41.996410] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.996418] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.996422] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.996446] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.170 [2024-05-13 20:40:41.996452] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.170 [2024-05-13 20:40:41.996456] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.170 [2024-05-13 20:40:41.996459] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481980) on tqpair=0x2419c30 00:27:26.170 [2024-05-13 20:40:41.996468] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:26.170 [2024-05-13 20:40:41.996475] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:26.170 [2024-05-13 20:40:41.996480] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:26.170 [2024-05-13 20:40:41.996484] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:26.170 [2024-05-13 20:40:41.996488] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:26.170 [2024-05-13 20:40:41.996493] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:26.171 [2024-05-13 20:40:41.996502] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:26.171 [2024-05-13 20:40:41.996508] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996512] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996515] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x2419c30) 00:27:26.171 [2024-05-13 20:40:41.996522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:26.171 [2024-05-13 20:40:41.996533] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481980, cid 0, qid 0 00:27:26.171 [2024-05-13 20:40:41.996602] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.171 [2024-05-13 20:40:41.996609] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.171 [2024-05-13 20:40:41.996613] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996617] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481980) on tqpair=0x2419c30 00:27:26.171 [2024-05-13 20:40:41.996624] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996628] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996631] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2419c30) 00:27:26.171 [2024-05-13 20:40:41.996637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.171 [2024-05-13 20:40:41.996643] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996647] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996653] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2419c30) 00:27:26.171 [2024-05-13 20:40:41.996659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.171 [2024-05-13 20:40:41.996665] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996669] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996673] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2419c30) 00:27:26.171 [2024-05-13 20:40:41.996679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.171 [2024-05-13 20:40:41.996684] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996688] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996691] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.171 [2024-05-13 20:40:41.996697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.171 [2024-05-13 20:40:41.996701] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:26.171 [2024-05-13 20:40:41.996711] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:26.171 [2024-05-13 20:40:41.996718] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996721] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x2419c30) 00:27:26.171 [2024-05-13 20:40:41.996728] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.171 [2024-05-13 20:40:41.996739] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481980, cid 0, qid 0 00:27:26.171 [2024-05-13 20:40:41.996746] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481ae0, cid 1, qid 0 00:27:26.171 [2024-05-13 20:40:41.996753] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481c40, cid 2, qid 0 00:27:26.171 [2024-05-13 20:40:41.996761] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.171 [2024-05-13 20:40:41.996766] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481f00, cid 4, qid 0 00:27:26.171 [2024-05-13 20:40:41.996846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.171 [2024-05-13 20:40:41.996853] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.171 [2024-05-13 20:40:41.996856] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996860] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481f00) on tqpair=0x2419c30 00:27:26.171 [2024-05-13 20:40:41.996865] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:26.171 [2024-05-13 20:40:41.996870] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:26.171 [2024-05-13 20:40:41.996878] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:26.171 [2024-05-13 20:40:41.996884] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:26.171 [2024-05-13 20:40:41.996890] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996894] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996897] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2419c30) 00:27:26.171 [2024-05-13 20:40:41.996903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:26.171 [2024-05-13 20:40:41.996915] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481f00, cid 4, qid 0 00:27:26.171 [2024-05-13 20:40:41.996984] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.171 [2024-05-13 20:40:41.996990] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.171 [2024-05-13 20:40:41.996994] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.996998] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481f00) on tqpair=0x2419c30 00:27:26.171 [2024-05-13 20:40:41.997053] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:26.171 [2024-05-13 20:40:41.997062] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 
30000 ms) 00:27:26.171 [2024-05-13 20:40:41.997069] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.997073] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2419c30) 00:27:26.171 [2024-05-13 20:40:41.997079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.171 [2024-05-13 20:40:41.997090] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481f00, cid 4, qid 0 00:27:26.171 [2024-05-13 20:40:41.997162] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:26.171 [2024-05-13 20:40:41.997168] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:26.171 [2024-05-13 20:40:41.997172] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.997176] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2419c30): datao=0, datal=4096, cccid=4 00:27:26.171 [2024-05-13 20:40:41.997180] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2481f00) on tqpair(0x2419c30): expected_datao=0, payload_size=4096 00:27:26.171 [2024-05-13 20:40:41.997185] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.997246] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.997252] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.997293] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.171 [2024-05-13 20:40:41.997300] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.171 [2024-05-13 20:40:41.997303] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.997307] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481f00) on tqpair=0x2419c30 00:27:26.171 [2024-05-13 20:40:41.997322] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:26.171 [2024-05-13 20:40:41.997335] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:26.171 [2024-05-13 20:40:41.997344] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:26.171 [2024-05-13 20:40:41.997351] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.997355] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2419c30) 00:27:26.171 [2024-05-13 20:40:41.997361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.171 [2024-05-13 20:40:41.997372] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481f00, cid 4, qid 0 00:27:26.171 [2024-05-13 20:40:41.997454] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:26.171 [2024-05-13 20:40:41.997461] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:26.171 [2024-05-13 20:40:41.997464] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.997468] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2419c30): 
datao=0, datal=4096, cccid=4 00:27:26.171 [2024-05-13 20:40:41.997474] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2481f00) on tqpair(0x2419c30): expected_datao=0, payload_size=4096 00:27:26.171 [2024-05-13 20:40:41.997478] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.997508] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.997514] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:26.171 [2024-05-13 20:40:41.997618] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.171 [2024-05-13 20:40:41.997625] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.172 [2024-05-13 20:40:41.997628] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.997632] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481f00) on tqpair=0x2419c30 00:27:26.172 [2024-05-13 20:40:41.997645] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:26.172 [2024-05-13 20:40:41.997654] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:26.172 [2024-05-13 20:40:41.997661] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.997665] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2419c30) 00:27:26.172 [2024-05-13 20:40:41.997671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.172 [2024-05-13 20:40:41.997682] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481f00, cid 4, qid 0 00:27:26.172 [2024-05-13 20:40:41.997751] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:26.172 [2024-05-13 20:40:41.997758] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:26.172 [2024-05-13 20:40:41.997761] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.997765] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2419c30): datao=0, datal=4096, cccid=4 00:27:26.172 [2024-05-13 20:40:41.997769] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2481f00) on tqpair(0x2419c30): expected_datao=0, payload_size=4096 00:27:26.172 [2024-05-13 20:40:41.997773] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.997802] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.997807] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.997889] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.172 [2024-05-13 20:40:41.997896] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.172 [2024-05-13 20:40:41.997899] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.997903] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481f00) on tqpair=0x2419c30 00:27:26.172 [2024-05-13 20:40:41.997910] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific 
(timeout 30000 ms) 00:27:26.172 [2024-05-13 20:40:41.997918] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:26.172 [2024-05-13 20:40:41.997926] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:26.172 [2024-05-13 20:40:41.997932] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:26.172 [2024-05-13 20:40:41.997937] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:26.172 [2024-05-13 20:40:41.997941] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:26.172 [2024-05-13 20:40:41.997948] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:26.172 [2024-05-13 20:40:41.997953] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:26.172 [2024-05-13 20:40:41.997969] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.997973] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2419c30) 00:27:26.172 [2024-05-13 20:40:41.997980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.172 [2024-05-13 20:40:41.997986] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.997990] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.997993] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2419c30) 00:27:26.172 [2024-05-13 20:40:41.997999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.172 [2024-05-13 20:40:41.998012] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481f00, cid 4, qid 0 00:27:26.172 [2024-05-13 20:40:41.998018] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2482060, cid 5, qid 0 00:27:26.172 [2024-05-13 20:40:41.998093] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.172 [2024-05-13 20:40:41.998100] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.172 [2024-05-13 20:40:41.998103] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998107] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481f00) on tqpair=0x2419c30 00:27:26.172 [2024-05-13 20:40:41.998114] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.172 [2024-05-13 20:40:41.998120] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.172 [2024-05-13 20:40:41.998123] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998127] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2482060) on tqpair=0x2419c30 00:27:26.172 [2024-05-13 20:40:41.998137] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998140] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2419c30) 00:27:26.172 [2024-05-13 20:40:41.998147] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.172 [2024-05-13 20:40:41.998157] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2482060, cid 5, qid 0 00:27:26.172 [2024-05-13 20:40:41.998222] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.172 [2024-05-13 20:40:41.998229] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.172 [2024-05-13 20:40:41.998232] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998236] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2482060) on tqpair=0x2419c30 00:27:26.172 [2024-05-13 20:40:41.998245] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998249] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2419c30) 00:27:26.172 [2024-05-13 20:40:41.998255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.172 [2024-05-13 20:40:41.998265] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2482060, cid 5, qid 0 00:27:26.172 [2024-05-13 20:40:41.998333] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.172 [2024-05-13 20:40:41.998340] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.172 [2024-05-13 20:40:41.998344] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998347] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2482060) on tqpair=0x2419c30 00:27:26.172 [2024-05-13 20:40:41.998359] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998363] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2419c30) 00:27:26.172 [2024-05-13 20:40:41.998369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.172 [2024-05-13 20:40:41.998379] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2482060, cid 5, qid 0 00:27:26.172 [2024-05-13 20:40:41.998442] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.172 [2024-05-13 20:40:41.998448] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.172 [2024-05-13 20:40:41.998452] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998455] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2482060) on tqpair=0x2419c30 00:27:26.172 [2024-05-13 20:40:41.998467] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998471] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2419c30) 00:27:26.172 [2024-05-13 20:40:41.998478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.172 [2024-05-13 20:40:41.998485] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.172 [2024-05-13 
20:40:41.998488] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2419c30) 00:27:26.172 [2024-05-13 20:40:41.998494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.172 [2024-05-13 20:40:41.998501] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998505] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2419c30) 00:27:26.172 [2024-05-13 20:40:41.998511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.172 [2024-05-13 20:40:41.998518] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998521] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2419c30) 00:27:26.172 [2024-05-13 20:40:41.998527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.172 [2024-05-13 20:40:41.998538] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2482060, cid 5, qid 0 00:27:26.172 [2024-05-13 20:40:41.998544] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481f00, cid 4, qid 0 00:27:26.172 [2024-05-13 20:40:41.998552] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24821c0, cid 6, qid 0 00:27:26.172 [2024-05-13 20:40:41.998558] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2482320, cid 7, qid 0 00:27:26.172 [2024-05-13 20:40:41.998668] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:26.172 [2024-05-13 20:40:41.998677] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:26.172 [2024-05-13 20:40:41.998684] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998688] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2419c30): datao=0, datal=8192, cccid=5 00:27:26.172 [2024-05-13 20:40:41.998692] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2482060) on tqpair(0x2419c30): expected_datao=0, payload_size=8192 00:27:26.172 [2024-05-13 20:40:41.998696] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998832] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998839] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:26.172 [2024-05-13 20:40:41.998856] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:26.172 [2024-05-13 20:40:41.998860] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:26.172 [2024-05-13 20:40:41.998863] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2419c30): datao=0, datal=512, cccid=4 00:27:26.173 [2024-05-13 20:40:41.998867] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2481f00) on tqpair(0x2419c30): expected_datao=0, payload_size=512 00:27:26.173 [2024-05-13 20:40:41.998872] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.998878] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.998881] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.998887] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:26.173 [2024-05-13 20:40:41.998892] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:26.173 [2024-05-13 20:40:41.998896] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.998899] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2419c30): datao=0, datal=512, cccid=6 00:27:26.173 [2024-05-13 20:40:41.998903] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24821c0) on tqpair(0x2419c30): expected_datao=0, payload_size=512 00:27:26.173 [2024-05-13 20:40:41.998907] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.998914] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.998917] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.998923] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:26.173 [2024-05-13 20:40:41.998928] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:26.173 [2024-05-13 20:40:41.998932] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.998935] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2419c30): datao=0, datal=4096, cccid=7 00:27:26.173 [2024-05-13 20:40:41.998939] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2482320) on tqpair(0x2419c30): expected_datao=0, payload_size=4096 00:27:26.173 [2024-05-13 20:40:41.998943] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.998950] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.998953] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.998977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.173 [2024-05-13 20:40:41.998983] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.173 [2024-05-13 20:40:41.998987] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.998990] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2482060) on tqpair=0x2419c30 00:27:26.173 [2024-05-13 20:40:41.999003] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.173 [2024-05-13 20:40:41.999009] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.173 [2024-05-13 20:40:41.999013] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.999016] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481f00) on tqpair=0x2419c30 00:27:26.173 [2024-05-13 20:40:41.999025] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.173 [2024-05-13 20:40:41.999031] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.173 [2024-05-13 20:40:41.999035] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.999038] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24821c0) on tqpair=0x2419c30 00:27:26.173 [2024-05-13 20:40:41.999048] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.173 [2024-05-13 20:40:41.999054] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.173 [2024-05-13 20:40:41.999057] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.173 [2024-05-13 20:40:41.999061] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2482320) on tqpair=0x2419c30 00:27:26.173 ===================================================== 00:27:26.173 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:26.173 ===================================================== 00:27:26.173 Controller Capabilities/Features 00:27:26.173 ================================ 00:27:26.173 Vendor ID: 8086 00:27:26.173 Subsystem Vendor ID: 8086 00:27:26.173 Serial Number: SPDK00000000000001 00:27:26.173 Model Number: SPDK bdev Controller 00:27:26.173 Firmware Version: 24.05 00:27:26.173 Recommended Arb Burst: 6 00:27:26.173 IEEE OUI Identifier: e4 d2 5c 00:27:26.173 Multi-path I/O 00:27:26.173 May have multiple subsystem ports: Yes 00:27:26.173 May have multiple controllers: Yes 00:27:26.173 Associated with SR-IOV VF: No 00:27:26.173 Max Data Transfer Size: 131072 00:27:26.173 Max Number of Namespaces: 32 00:27:26.173 Max Number of I/O Queues: 127 00:27:26.173 NVMe Specification Version (VS): 1.3 00:27:26.173 NVMe Specification Version (Identify): 1.3 00:27:26.173 Maximum Queue Entries: 128 00:27:26.173 Contiguous Queues Required: Yes 00:27:26.173 Arbitration Mechanisms Supported 00:27:26.173 Weighted Round Robin: Not Supported 00:27:26.173 Vendor Specific: Not Supported 00:27:26.173 Reset Timeout: 15000 ms 00:27:26.173 Doorbell Stride: 4 bytes 00:27:26.173 NVM Subsystem Reset: Not Supported 00:27:26.173 Command Sets Supported 00:27:26.173 NVM Command Set: Supported 00:27:26.173 Boot Partition: Not Supported 00:27:26.173 Memory Page Size Minimum: 4096 bytes 00:27:26.173 Memory Page Size Maximum: 4096 bytes 00:27:26.173 Persistent Memory Region: Not Supported 00:27:26.173 Optional Asynchronous Events Supported 00:27:26.173 Namespace Attribute Notices: Supported 00:27:26.173 Firmware Activation Notices: Not Supported 00:27:26.173 ANA Change Notices: Not Supported 00:27:26.173 PLE Aggregate Log Change Notices: Not Supported 00:27:26.173 LBA Status Info Alert Notices: Not Supported 00:27:26.173 EGE Aggregate Log Change Notices: Not Supported 00:27:26.173 Normal NVM Subsystem Shutdown event: Not Supported 00:27:26.173 Zone Descriptor Change Notices: Not Supported 00:27:26.173 Discovery Log Change Notices: Not Supported 00:27:26.173 Controller Attributes 00:27:26.173 128-bit Host Identifier: Supported 00:27:26.173 Non-Operational Permissive Mode: Not Supported 00:27:26.173 NVM Sets: Not Supported 00:27:26.173 Read Recovery Levels: Not Supported 00:27:26.173 Endurance Groups: Not Supported 00:27:26.173 Predictable Latency Mode: Not Supported 00:27:26.173 Traffic Based Keep ALive: Not Supported 00:27:26.173 Namespace Granularity: Not Supported 00:27:26.173 SQ Associations: Not Supported 00:27:26.173 UUID List: Not Supported 00:27:26.173 Multi-Domain Subsystem: Not Supported 00:27:26.173 Fixed Capacity Management: Not Supported 00:27:26.173 Variable Capacity Management: Not Supported 00:27:26.173 Delete Endurance Group: Not Supported 00:27:26.173 Delete NVM Set: Not Supported 00:27:26.173 Extended LBA Formats Supported: Not Supported 00:27:26.173 Flexible Data Placement Supported: Not Supported 00:27:26.173 00:27:26.173 Controller Memory Buffer 
Support 00:27:26.173 ================================ 00:27:26.173 Supported: No 00:27:26.173 00:27:26.173 Persistent Memory Region Support 00:27:26.173 ================================ 00:27:26.173 Supported: No 00:27:26.173 00:27:26.173 Admin Command Set Attributes 00:27:26.173 ============================ 00:27:26.173 Security Send/Receive: Not Supported 00:27:26.173 Format NVM: Not Supported 00:27:26.173 Firmware Activate/Download: Not Supported 00:27:26.173 Namespace Management: Not Supported 00:27:26.173 Device Self-Test: Not Supported 00:27:26.173 Directives: Not Supported 00:27:26.173 NVMe-MI: Not Supported 00:27:26.173 Virtualization Management: Not Supported 00:27:26.173 Doorbell Buffer Config: Not Supported 00:27:26.173 Get LBA Status Capability: Not Supported 00:27:26.173 Command & Feature Lockdown Capability: Not Supported 00:27:26.173 Abort Command Limit: 4 00:27:26.173 Async Event Request Limit: 4 00:27:26.173 Number of Firmware Slots: N/A 00:27:26.173 Firmware Slot 1 Read-Only: N/A 00:27:26.173 Firmware Activation Without Reset: N/A 00:27:26.173 Multiple Update Detection Support: N/A 00:27:26.173 Firmware Update Granularity: No Information Provided 00:27:26.173 Per-Namespace SMART Log: No 00:27:26.173 Asymmetric Namespace Access Log Page: Not Supported 00:27:26.173 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:26.173 Command Effects Log Page: Supported 00:27:26.173 Get Log Page Extended Data: Supported 00:27:26.173 Telemetry Log Pages: Not Supported 00:27:26.173 Persistent Event Log Pages: Not Supported 00:27:26.173 Supported Log Pages Log Page: May Support 00:27:26.173 Commands Supported & Effects Log Page: Not Supported 00:27:26.173 Feature Identifiers & Effects Log Page:May Support 00:27:26.173 NVMe-MI Commands & Effects Log Page: May Support 00:27:26.173 Data Area 4 for Telemetry Log: Not Supported 00:27:26.173 Error Log Page Entries Supported: 128 00:27:26.173 Keep Alive: Supported 00:27:26.173 Keep Alive Granularity: 10000 ms 00:27:26.173 00:27:26.173 NVM Command Set Attributes 00:27:26.173 ========================== 00:27:26.173 Submission Queue Entry Size 00:27:26.173 Max: 64 00:27:26.173 Min: 64 00:27:26.173 Completion Queue Entry Size 00:27:26.173 Max: 16 00:27:26.173 Min: 16 00:27:26.173 Number of Namespaces: 32 00:27:26.173 Compare Command: Supported 00:27:26.173 Write Uncorrectable Command: Not Supported 00:27:26.173 Dataset Management Command: Supported 00:27:26.173 Write Zeroes Command: Supported 00:27:26.173 Set Features Save Field: Not Supported 00:27:26.173 Reservations: Supported 00:27:26.173 Timestamp: Not Supported 00:27:26.173 Copy: Supported 00:27:26.173 Volatile Write Cache: Present 00:27:26.173 Atomic Write Unit (Normal): 1 00:27:26.173 Atomic Write Unit (PFail): 1 00:27:26.173 Atomic Compare & Write Unit: 1 00:27:26.173 Fused Compare & Write: Supported 00:27:26.173 Scatter-Gather List 00:27:26.173 SGL Command Set: Supported 00:27:26.173 SGL Keyed: Supported 00:27:26.173 SGL Bit Bucket Descriptor: Not Supported 00:27:26.173 SGL Metadata Pointer: Not Supported 00:27:26.173 Oversized SGL: Not Supported 00:27:26.173 SGL Metadata Address: Not Supported 00:27:26.173 SGL Offset: Supported 00:27:26.173 Transport SGL Data Block: Not Supported 00:27:26.173 Replay Protected Memory Block: Not Supported 00:27:26.174 00:27:26.174 Firmware Slot Information 00:27:26.174 ========================= 00:27:26.174 Active slot: 1 00:27:26.174 Slot 1 Firmware Revision: 24.05 00:27:26.174 00:27:26.174 00:27:26.174 Commands Supported and Effects 00:27:26.174 
============================== 00:27:26.174 Admin Commands 00:27:26.174 -------------- 00:27:26.174 Get Log Page (02h): Supported 00:27:26.174 Identify (06h): Supported 00:27:26.174 Abort (08h): Supported 00:27:26.174 Set Features (09h): Supported 00:27:26.174 Get Features (0Ah): Supported 00:27:26.174 Asynchronous Event Request (0Ch): Supported 00:27:26.174 Keep Alive (18h): Supported 00:27:26.174 I/O Commands 00:27:26.174 ------------ 00:27:26.174 Flush (00h): Supported LBA-Change 00:27:26.174 Write (01h): Supported LBA-Change 00:27:26.174 Read (02h): Supported 00:27:26.174 Compare (05h): Supported 00:27:26.174 Write Zeroes (08h): Supported LBA-Change 00:27:26.174 Dataset Management (09h): Supported LBA-Change 00:27:26.174 Copy (19h): Supported LBA-Change 00:27:26.174 Unknown (79h): Supported LBA-Change 00:27:26.174 Unknown (7Ah): Supported 00:27:26.174 00:27:26.174 Error Log 00:27:26.174 ========= 00:27:26.174 00:27:26.174 Arbitration 00:27:26.174 =========== 00:27:26.174 Arbitration Burst: 1 00:27:26.174 00:27:26.174 Power Management 00:27:26.174 ================ 00:27:26.174 Number of Power States: 1 00:27:26.174 Current Power State: Power State #0 00:27:26.174 Power State #0: 00:27:26.174 Max Power: 0.00 W 00:27:26.174 Non-Operational State: Operational 00:27:26.174 Entry Latency: Not Reported 00:27:26.174 Exit Latency: Not Reported 00:27:26.174 Relative Read Throughput: 0 00:27:26.174 Relative Read Latency: 0 00:27:26.174 Relative Write Throughput: 0 00:27:26.174 Relative Write Latency: 0 00:27:26.174 Idle Power: Not Reported 00:27:26.174 Active Power: Not Reported 00:27:26.174 Non-Operational Permissive Mode: Not Supported 00:27:26.174 00:27:26.174 Health Information 00:27:26.174 ================== 00:27:26.174 Critical Warnings: 00:27:26.174 Available Spare Space: OK 00:27:26.174 Temperature: OK 00:27:26.174 Device Reliability: OK 00:27:26.174 Read Only: No 00:27:26.174 Volatile Memory Backup: OK 00:27:26.174 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:26.174 Temperature Threshold: [2024-05-13 20:40:41.999164] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:41.999169] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2419c30) 00:27:26.174 [2024-05-13 20:40:41.999177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.174 [2024-05-13 20:40:41.999188] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2482320, cid 7, qid 0 00:27:26.174 [2024-05-13 20:40:41.999259] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.174 [2024-05-13 20:40:41.999266] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.174 [2024-05-13 20:40:41.999270] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:41.999273] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2482320) on tqpair=0x2419c30 00:27:26.174 [2024-05-13 20:40:41.999302] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:26.174 [2024-05-13 20:40:42.003319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.174 [2024-05-13 20:40:42.003328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:26.174 [2024-05-13 20:40:42.003334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.174 [2024-05-13 20:40:42.003340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.174 [2024-05-13 20:40:42.003348] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003352] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003355] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.174 [2024-05-13 20:40:42.003362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.174 [2024-05-13 20:40:42.003375] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.174 [2024-05-13 20:40:42.003447] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.174 [2024-05-13 20:40:42.003454] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.174 [2024-05-13 20:40:42.003458] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003461] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.174 [2024-05-13 20:40:42.003469] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003472] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003476] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.174 [2024-05-13 20:40:42.003483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.174 [2024-05-13 20:40:42.003496] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.174 [2024-05-13 20:40:42.003561] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.174 [2024-05-13 20:40:42.003567] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.174 [2024-05-13 20:40:42.003571] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003574] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.174 [2024-05-13 20:40:42.003580] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:26.174 [2024-05-13 20:40:42.003584] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:26.174 [2024-05-13 20:40:42.003593] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003600] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003603] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.174 [2024-05-13 20:40:42.003610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.174 [2024-05-13 20:40:42.003620] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.174 
[2024-05-13 20:40:42.003682] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.174 [2024-05-13 20:40:42.003688] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.174 [2024-05-13 20:40:42.003692] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003695] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.174 [2024-05-13 20:40:42.003706] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003710] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003713] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.174 [2024-05-13 20:40:42.003720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.174 [2024-05-13 20:40:42.003730] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.174 [2024-05-13 20:40:42.003791] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.174 [2024-05-13 20:40:42.003797] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.174 [2024-05-13 20:40:42.003801] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003805] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.174 [2024-05-13 20:40:42.003815] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003819] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003822] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.174 [2024-05-13 20:40:42.003829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.174 [2024-05-13 20:40:42.003838] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.174 [2024-05-13 20:40:42.003903] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.174 [2024-05-13 20:40:42.003910] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.174 [2024-05-13 20:40:42.003913] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003917] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.174 [2024-05-13 20:40:42.003927] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003931] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.003934] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.174 [2024-05-13 20:40:42.003941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.174 [2024-05-13 20:40:42.003950] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.174 [2024-05-13 20:40:42.004009] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.174 [2024-05-13 20:40:42.004016] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:27:26.174 [2024-05-13 20:40:42.004019] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.004023] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.174 [2024-05-13 20:40:42.004033] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.004037] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.174 [2024-05-13 20:40:42.004042] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.174 [2024-05-13 20:40:42.004049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.174 [2024-05-13 20:40:42.004059] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.174 [2024-05-13 20:40:42.004121] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.174 [2024-05-13 20:40:42.004128] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.174 [2024-05-13 20:40:42.004131] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004135] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.175 [2024-05-13 20:40:42.004145] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004148] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004152] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.175 [2024-05-13 20:40:42.004158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.175 [2024-05-13 20:40:42.004168] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.175 [2024-05-13 20:40:42.004227] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.175 [2024-05-13 20:40:42.004234] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.175 [2024-05-13 20:40:42.004237] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004241] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.175 [2024-05-13 20:40:42.004252] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004256] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004259] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.175 [2024-05-13 20:40:42.004266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.175 [2024-05-13 20:40:42.004275] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.175 [2024-05-13 20:40:42.004340] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.175 [2024-05-13 20:40:42.004347] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.175 [2024-05-13 20:40:42.004350] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004354] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.175 [2024-05-13 20:40:42.004364] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004368] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004372] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.175 [2024-05-13 20:40:42.004378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.175 [2024-05-13 20:40:42.004389] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.175 [2024-05-13 20:40:42.004450] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.175 [2024-05-13 20:40:42.004457] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.175 [2024-05-13 20:40:42.004460] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004464] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.175 [2024-05-13 20:40:42.004474] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004478] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004482] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.175 [2024-05-13 20:40:42.004491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.175 [2024-05-13 20:40:42.004500] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.175 [2024-05-13 20:40:42.004559] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.175 [2024-05-13 20:40:42.004566] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.175 [2024-05-13 20:40:42.004569] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004573] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.175 [2024-05-13 20:40:42.004583] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004587] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004590] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.175 [2024-05-13 20:40:42.004597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.175 [2024-05-13 20:40:42.004606] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.175 [2024-05-13 20:40:42.004665] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.175 [2024-05-13 20:40:42.004672] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.175 [2024-05-13 20:40:42.004675] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004679] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.175 [2024-05-13 20:40:42.004689] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:27:26.175 [2024-05-13 20:40:42.004693] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004696] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.175 [2024-05-13 20:40:42.004703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.175 [2024-05-13 20:40:42.004712] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.175 [2024-05-13 20:40:42.004770] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.175 [2024-05-13 20:40:42.004777] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.175 [2024-05-13 20:40:42.004780] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004784] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.175 [2024-05-13 20:40:42.004794] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004798] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004801] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.175 [2024-05-13 20:40:42.004808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.175 [2024-05-13 20:40:42.004817] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.175 [2024-05-13 20:40:42.004878] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.175 [2024-05-13 20:40:42.004885] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.175 [2024-05-13 20:40:42.004888] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004892] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.175 [2024-05-13 20:40:42.004902] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004906] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.004909] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.175 [2024-05-13 20:40:42.004916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.175 [2024-05-13 20:40:42.004927] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.175 [2024-05-13 20:40:42.004986] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.175 [2024-05-13 20:40:42.004993] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.175 [2024-05-13 20:40:42.004996] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005000] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.175 [2024-05-13 20:40:42.005010] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005014] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005017] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.175 [2024-05-13 20:40:42.005024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.175 [2024-05-13 20:40:42.005034] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.175 [2024-05-13 20:40:42.005092] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.175 [2024-05-13 20:40:42.005099] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.175 [2024-05-13 20:40:42.005102] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005106] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.175 [2024-05-13 20:40:42.005116] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005120] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005123] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.175 [2024-05-13 20:40:42.005130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.175 [2024-05-13 20:40:42.005140] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.175 [2024-05-13 20:40:42.005201] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.175 [2024-05-13 20:40:42.005207] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.175 [2024-05-13 20:40:42.005211] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005214] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.175 [2024-05-13 20:40:42.005224] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005228] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005231] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.175 [2024-05-13 20:40:42.005238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.175 [2024-05-13 20:40:42.005248] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.175 [2024-05-13 20:40:42.005306] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.175 [2024-05-13 20:40:42.005319] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.175 [2024-05-13 20:40:42.005323] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005326] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.175 [2024-05-13 20:40:42.005337] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005340] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005344] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.175 [2024-05-13 20:40:42.005351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.175 [2024-05-13 20:40:42.005362] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.175 [2024-05-13 20:40:42.005422] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.175 [2024-05-13 20:40:42.005429] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.175 [2024-05-13 20:40:42.005432] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005436] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.175 [2024-05-13 20:40:42.005446] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005450] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.175 [2024-05-13 20:40:42.005453] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.176 [2024-05-13 20:40:42.005460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.176 [2024-05-13 20:40:42.005469] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.176 [2024-05-13 20:40:42.005530] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.176 [2024-05-13 20:40:42.005536] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.176 [2024-05-13 20:40:42.005540] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.005544] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.176 [2024-05-13 20:40:42.005554] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.005558] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.005561] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.176 [2024-05-13 20:40:42.005568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.176 [2024-05-13 20:40:42.005577] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.176 [2024-05-13 20:40:42.005641] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.176 [2024-05-13 20:40:42.005648] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.176 [2024-05-13 20:40:42.005652] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.005655] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.176 [2024-05-13 20:40:42.005665] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.005669] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.005672] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.176 [2024-05-13 20:40:42.005679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.176 [2024-05-13 20:40:42.005689] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 
0 00:27:26.176 [2024-05-13 20:40:42.005756] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.176 [2024-05-13 20:40:42.005762] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.176 [2024-05-13 20:40:42.005766] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.005769] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.176 [2024-05-13 20:40:42.005779] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.005783] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.005787] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.176 [2024-05-13 20:40:42.005794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.176 [2024-05-13 20:40:42.005803] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.176 [2024-05-13 20:40:42.005864] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.176 [2024-05-13 20:40:42.005871] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.176 [2024-05-13 20:40:42.005874] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.005878] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.176 [2024-05-13 20:40:42.005888] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.005892] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.005896] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.176 [2024-05-13 20:40:42.005902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.176 [2024-05-13 20:40:42.005912] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.176 [2024-05-13 20:40:42.005973] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.176 [2024-05-13 20:40:42.005979] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.176 [2024-05-13 20:40:42.005982] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.005986] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.176 [2024-05-13 20:40:42.005996] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006000] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006004] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.176 [2024-05-13 20:40:42.006010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.176 [2024-05-13 20:40:42.006020] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.176 [2024-05-13 20:40:42.006083] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.176 [2024-05-13 20:40:42.006090] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:27:26.176 [2024-05-13 20:40:42.006093] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006097] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.176 [2024-05-13 20:40:42.006108] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006112] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006115] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.176 [2024-05-13 20:40:42.006122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.176 [2024-05-13 20:40:42.006131] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.176 [2024-05-13 20:40:42.006192] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.176 [2024-05-13 20:40:42.006199] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.176 [2024-05-13 20:40:42.006202] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006205] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.176 [2024-05-13 20:40:42.006215] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006219] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006223] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.176 [2024-05-13 20:40:42.006229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.176 [2024-05-13 20:40:42.006239] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.176 [2024-05-13 20:40:42.006304] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.176 [2024-05-13 20:40:42.006311] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.176 [2024-05-13 20:40:42.006319] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006323] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.176 [2024-05-13 20:40:42.006333] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006337] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006341] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.176 [2024-05-13 20:40:42.006347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.176 [2024-05-13 20:40:42.006357] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.176 [2024-05-13 20:40:42.006419] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.176 [2024-05-13 20:40:42.006425] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.176 [2024-05-13 20:40:42.006429] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006432] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.176 [2024-05-13 20:40:42.006442] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006447] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006450] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.176 [2024-05-13 20:40:42.006457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.176 [2024-05-13 20:40:42.006467] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.176 [2024-05-13 20:40:42.006528] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.176 [2024-05-13 20:40:42.006534] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.176 [2024-05-13 20:40:42.006537] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006541] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.176 [2024-05-13 20:40:42.006551] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006555] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.176 [2024-05-13 20:40:42.006558] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.176 [2024-05-13 20:40:42.006565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.176 [2024-05-13 20:40:42.006575] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.176 [2024-05-13 20:40:42.006636] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.177 [2024-05-13 20:40:42.006642] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.177 [2024-05-13 20:40:42.006646] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.006650] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.177 [2024-05-13 20:40:42.006660] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.006664] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.006667] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.177 [2024-05-13 20:40:42.006674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.177 [2024-05-13 20:40:42.006683] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.177 [2024-05-13 20:40:42.006744] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.177 [2024-05-13 20:40:42.006752] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.177 [2024-05-13 20:40:42.006756] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.006759] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.177 [2024-05-13 20:40:42.006770] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:27:26.177 [2024-05-13 20:40:42.006773] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.006777] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.177 [2024-05-13 20:40:42.006783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.177 [2024-05-13 20:40:42.006793] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.177 [2024-05-13 20:40:42.006852] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.177 [2024-05-13 20:40:42.006858] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.177 [2024-05-13 20:40:42.006862] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.006865] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.177 [2024-05-13 20:40:42.006876] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.006880] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.006883] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.177 [2024-05-13 20:40:42.006890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.177 [2024-05-13 20:40:42.006899] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.177 [2024-05-13 20:40:42.006961] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.177 [2024-05-13 20:40:42.006967] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.177 [2024-05-13 20:40:42.006971] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.006974] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.177 [2024-05-13 20:40:42.006984] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.006988] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.006992] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.177 [2024-05-13 20:40:42.006998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.177 [2024-05-13 20:40:42.007008] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.177 [2024-05-13 20:40:42.007069] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.177 [2024-05-13 20:40:42.007075] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.177 [2024-05-13 20:40:42.007079] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.007082] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.177 [2024-05-13 20:40:42.007092] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.007096] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.007100] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.177 [2024-05-13 20:40:42.007106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.177 [2024-05-13 20:40:42.007116] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.177 [2024-05-13 20:40:42.007179] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.177 [2024-05-13 20:40:42.007186] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.177 [2024-05-13 20:40:42.007191] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.007195] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.177 [2024-05-13 20:40:42.007206] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.007209] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.007213] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.177 [2024-05-13 20:40:42.007219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.177 [2024-05-13 20:40:42.007230] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.177 [2024-05-13 20:40:42.011325] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.177 [2024-05-13 20:40:42.011335] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.177 [2024-05-13 20:40:42.011339] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.011343] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.177 [2024-05-13 20:40:42.011353] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.011357] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.011360] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2419c30) 00:27:26.177 [2024-05-13 20:40:42.011367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.177 [2024-05-13 20:40:42.011378] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2481da0, cid 3, qid 0 00:27:26.177 [2024-05-13 20:40:42.011445] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:26.177 [2024-05-13 20:40:42.011452] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:26.177 [2024-05-13 20:40:42.011455] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:26.177 [2024-05-13 20:40:42.011459] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2481da0) on tqpair=0x2419c30 00:27:26.177 [2024-05-13 20:40:42.011467] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:27:26.177 0 Kelvin (-273 Celsius) 00:27:26.177 Available Spare: 0% 00:27:26.177 Available Spare Threshold: 0% 00:27:26.177 Life Percentage Used: 0% 00:27:26.177 Data Units Read: 0 00:27:26.177 Data Units Written: 0 00:27:26.177 Host Read Commands: 0 00:27:26.177 Host Write Commands: 0 
00:27:26.177 Controller Busy Time: 0 minutes 00:27:26.177 Power Cycles: 0 00:27:26.177 Power On Hours: 0 hours 00:27:26.177 Unsafe Shutdowns: 0 00:27:26.177 Unrecoverable Media Errors: 0 00:27:26.177 Lifetime Error Log Entries: 0 00:27:26.177 Warning Temperature Time: 0 minutes 00:27:26.177 Critical Temperature Time: 0 minutes 00:27:26.177 00:27:26.177 Number of Queues 00:27:26.177 ================ 00:27:26.177 Number of I/O Submission Queues: 127 00:27:26.177 Number of I/O Completion Queues: 127 00:27:26.177 00:27:26.177 Active Namespaces 00:27:26.177 ================= 00:27:26.177 Namespace ID:1 00:27:26.177 Error Recovery Timeout: Unlimited 00:27:26.177 Command Set Identifier: NVM (00h) 00:27:26.177 Deallocate: Supported 00:27:26.177 Deallocated/Unwritten Error: Not Supported 00:27:26.177 Deallocated Read Value: Unknown 00:27:26.177 Deallocate in Write Zeroes: Not Supported 00:27:26.177 Deallocated Guard Field: 0xFFFF 00:27:26.177 Flush: Supported 00:27:26.177 Reservation: Supported 00:27:26.177 Namespace Sharing Capabilities: Multiple Controllers 00:27:26.177 Size (in LBAs): 131072 (0GiB) 00:27:26.177 Capacity (in LBAs): 131072 (0GiB) 00:27:26.177 Utilization (in LBAs): 131072 (0GiB) 00:27:26.177 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:26.177 EUI64: ABCDEF0123456789 00:27:26.177 UUID: 68808220-8821-43f3-a07b-69a4d23e8aa8 00:27:26.177 Thin Provisioning: Not Supported 00:27:26.177 Per-NS Atomic Units: Yes 00:27:26.177 Atomic Boundary Size (Normal): 0 00:27:26.177 Atomic Boundary Size (PFail): 0 00:27:26.177 Atomic Boundary Offset: 0 00:27:26.177 Maximum Single Source Range Length: 65535 00:27:26.177 Maximum Copy Length: 65535 00:27:26.177 Maximum Source Range Count: 1 00:27:26.177 NGUID/EUI64 Never Reused: No 00:27:26.177 Namespace Write Protected: No 00:27:26.177 Number of LBA Formats: 1 00:27:26.177 Current LBA Format: LBA Format #00 00:27:26.177 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:26.177 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:26.177 rmmod nvme_tcp 00:27:26.177 rmmod nvme_fabrics 00:27:26.177 rmmod nvme_keyring 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@489 -- # '[' -n 3187153 ']' 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3187153 00:27:26.177 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 3187153 ']' 00:27:26.178 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3187153 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3187153 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3187153' 00:27:26.439 killing process with pid 3187153 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 3187153 00:27:26.439 [2024-05-13 20:40:42.160788] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3187153 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.439 20:40:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.984 20:40:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:28.984 00:27:28.984 real 0m11.765s 00:27:28.984 user 0m7.894s 00:27:28.984 sys 0m6.259s 00:27:28.984 20:40:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:28.984 20:40:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:28.984 ************************************ 00:27:28.984 END TEST nvmf_identify 00:27:28.984 ************************************ 00:27:28.984 20:40:44 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:28.984 20:40:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:28.984 20:40:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:28.984 20:40:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:28.984 ************************************ 00:27:28.984 START TEST nvmf_perf 00:27:28.984 ************************************ 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:28.984 * Looking for test storage... 
00:27:28.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.984 20:40:44 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.984 20:40:44 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:28.985 20:40:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:37.128 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:37.128 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:37.128 Found net devices under 0000:31:00.0: cvl_0_0 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.128 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:37.129 Found net devices under 0000:31:00.1: cvl_0_1 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.129 20:40:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:37.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:27:37.129 00:27:37.129 --- 10.0.0.2 ping statistics --- 00:27:37.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.129 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:37.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:27:37.129 00:27:37.129 --- 10.0.0.1 ping statistics --- 00:27:37.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.129 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3191978 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3191978 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3191978 ']' 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:37.129 20:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:37.129 [2024-05-13 20:40:52.248450] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:27:37.129 [2024-05-13 20:40:52.248505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.129 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.129 [2024-05-13 20:40:52.323252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.129 [2024-05-13 20:40:52.392620] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.129 [2024-05-13 20:40:52.392659] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:37.129 [2024-05-13 20:40:52.392667] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.129 [2024-05-13 20:40:52.392674] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.130 [2024-05-13 20:40:52.392679] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.130 [2024-05-13 20:40:52.392820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.130 [2024-05-13 20:40:52.392946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.130 [2024-05-13 20:40:52.393071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.130 [2024-05-13 20:40:52.393074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.130 20:40:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:37.130 20:40:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:27:37.130 20:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:37.130 20:40:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:37.130 20:40:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:37.130 20:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.130 20:40:53 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:37.130 20:40:53 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:37.703 20:40:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:37.703 20:40:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:37.963 20:40:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:27:37.963 20:40:53 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:37.963 20:40:53 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:37.963 20:40:53 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:27:37.963 20:40:53 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:37.963 20:40:53 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:37.963 20:40:53 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:38.224 [2024-05-13 20:40:54.029579] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.224 20:40:54 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:38.486 20:40:54 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:38.486 20:40:54 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:38.486 20:40:54 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:38.486 20:40:54 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:38.747 20:40:54 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.009 [2024-05-13 20:40:54.707830] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:39.009 [2024-05-13 20:40:54.708068] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.009 20:40:54 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:39.009 20:40:54 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:27:39.009 20:40:54 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:27:39.009 20:40:54 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:39.009 20:40:54 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:27:40.394 Initializing NVMe Controllers 00:27:40.394 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:27:40.394 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:27:40.394 Initialization complete. Launching workers. 00:27:40.394 ======================================================== 00:27:40.394 Latency(us) 00:27:40.394 Device Information : IOPS MiB/s Average min max 00:27:40.394 PCIE (0000:65:00.0) NSID 1 from core 0: 80664.72 315.10 396.16 13.28 5020.62 00:27:40.394 ======================================================== 00:27:40.394 Total : 80664.72 315.10 396.16 13.28 5020.62 00:27:40.394 00:27:40.394 20:40:56 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:40.394 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.780 Initializing NVMe Controllers 00:27:41.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:41.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:41.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:41.780 Initialization complete. Launching workers. 
00:27:41.780 ======================================================== 00:27:41.780 Latency(us) 00:27:41.780 Device Information : IOPS MiB/s Average min max 00:27:41.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 94.00 0.37 10742.74 292.94 45890.96 00:27:41.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 52.00 0.20 19529.67 4997.49 50879.49 00:27:41.780 ======================================================== 00:27:41.780 Total : 146.00 0.57 13872.33 292.94 50879.49 00:27:41.780 00:27:41.780 20:40:57 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:41.780 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.170 Initializing NVMe Controllers 00:27:43.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:43.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:43.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:43.170 Initialization complete. Launching workers. 00:27:43.170 ======================================================== 00:27:43.170 Latency(us) 00:27:43.170 Device Information : IOPS MiB/s Average min max 00:27:43.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8923.00 34.86 3598.84 446.40 10002.71 00:27:43.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3684.00 14.39 8724.35 6419.59 18897.21 00:27:43.170 ======================================================== 00:27:43.170 Total : 12607.00 49.25 5096.61 446.40 18897.21 00:27:43.170 00:27:43.170 20:40:58 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:43.170 20:40:58 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:43.170 20:40:58 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:43.170 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.723 Initializing NVMe Controllers 00:27:45.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:45.723 Controller IO queue size 128, less than required. 00:27:45.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:45.723 Controller IO queue size 128, less than required. 00:27:45.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:45.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:45.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:45.723 Initialization complete. Launching workers. 
00:27:45.723 ======================================================== 00:27:45.723 Latency(us) 00:27:45.724 Device Information : IOPS MiB/s Average min max 00:27:45.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1106.48 276.62 118125.70 67626.76 191299.45 00:27:45.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 591.99 148.00 231570.74 67346.98 341242.67 00:27:45.724 ======================================================== 00:27:45.724 Total : 1698.48 424.62 157666.16 67346.98 341242.67 00:27:45.724 00:27:45.724 20:41:01 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:45.724 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.988 No valid NVMe controllers or AIO or URING devices found 00:27:45.988 Initializing NVMe Controllers 00:27:45.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:45.988 Controller IO queue size 128, less than required. 00:27:45.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:45.988 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:45.988 Controller IO queue size 128, less than required. 00:27:45.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:45.988 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:27:45.988 WARNING: Some requested NVMe devices were skipped 00:27:45.988 20:41:01 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:45.988 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.584 Initializing NVMe Controllers 00:27:48.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:48.584 Controller IO queue size 128, less than required. 00:27:48.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:48.584 Controller IO queue size 128, less than required. 00:27:48.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:48.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:48.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:48.584 Initialization complete. Launching workers. 
00:27:48.584 00:27:48.584 ==================== 00:27:48.584 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:48.584 TCP transport: 00:27:48.584 polls: 36624 00:27:48.584 idle_polls: 14416 00:27:48.584 sock_completions: 22208 00:27:48.584 nvme_completions: 4701 00:27:48.584 submitted_requests: 7018 00:27:48.584 queued_requests: 1 00:27:48.584 00:27:48.584 ==================== 00:27:48.584 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:48.584 TCP transport: 00:27:48.584 polls: 33360 00:27:48.584 idle_polls: 10923 00:27:48.584 sock_completions: 22437 00:27:48.584 nvme_completions: 4553 00:27:48.584 submitted_requests: 6860 00:27:48.584 queued_requests: 1 00:27:48.584 ======================================================== 00:27:48.584 Latency(us) 00:27:48.584 Device Information : IOPS MiB/s Average min max 00:27:48.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1174.19 293.55 111952.30 53587.29 162608.01 00:27:48.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1137.21 284.30 113794.42 57243.22 200646.49 00:27:48.584 ======================================================== 00:27:48.584 Total : 2311.40 577.85 112858.63 53587.29 200646.49 00:27:48.584 00:27:48.584 20:41:04 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:48.584 20:41:04 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:48.584 20:41:04 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:48.584 20:41:04 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:27:48.584 20:41:04 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:49.526 20:41:05 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=12b02681-904c-47f1-8df5-b14b08c6046a 00:27:49.526 20:41:05 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 12b02681-904c-47f1-8df5-b14b08c6046a 00:27:49.526 20:41:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=12b02681-904c-47f1-8df5-b14b08c6046a 00:27:49.526 20:41:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:27:49.526 20:41:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:27:49.526 20:41:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:27:49.526 20:41:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:49.788 20:41:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:27:49.788 { 00:27:49.788 "uuid": "12b02681-904c-47f1-8df5-b14b08c6046a", 00:27:49.788 "name": "lvs_0", 00:27:49.788 "base_bdev": "Nvme0n1", 00:27:49.788 "total_data_clusters": 457407, 00:27:49.788 "free_clusters": 457407, 00:27:49.788 "block_size": 512, 00:27:49.788 "cluster_size": 4194304 00:27:49.788 } 00:27:49.788 ]' 00:27:49.788 20:41:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="12b02681-904c-47f1-8df5-b14b08c6046a") .free_clusters' 00:27:49.788 20:41:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=457407 00:27:49.788 20:41:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="12b02681-904c-47f1-8df5-b14b08c6046a") .cluster_size' 00:27:50.048 20:41:05 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:27:50.048 20:41:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=1829628 00:27:50.048 20:41:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 1829628 00:27:50.048 1829628 00:27:50.048 20:41:05 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:27:50.048 20:41:05 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:27:50.048 20:41:05 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 12b02681-904c-47f1-8df5-b14b08c6046a lbd_0 20480 00:27:50.048 20:41:05 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=9b0afd17-d9ff-4322-b3b5-5179b843f5e0 00:27:50.048 20:41:05 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 9b0afd17-d9ff-4322-b3b5-5179b843f5e0 lvs_n_0 00:27:51.962 20:41:07 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=3b507046-76ff-409b-abfc-1affdf9131bc 00:27:51.962 20:41:07 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 3b507046-76ff-409b-abfc-1affdf9131bc 00:27:51.962 20:41:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=3b507046-76ff-409b-abfc-1affdf9131bc 00:27:51.962 20:41:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:27:51.962 20:41:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:27:51.962 20:41:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:27:51.962 20:41:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:51.962 20:41:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:27:51.962 { 00:27:51.963 "uuid": "12b02681-904c-47f1-8df5-b14b08c6046a", 00:27:51.963 "name": "lvs_0", 00:27:51.963 "base_bdev": "Nvme0n1", 00:27:51.963 "total_data_clusters": 457407, 00:27:51.963 "free_clusters": 452287, 00:27:51.963 "block_size": 512, 00:27:51.963 "cluster_size": 4194304 00:27:51.963 }, 00:27:51.963 { 00:27:51.963 "uuid": "3b507046-76ff-409b-abfc-1affdf9131bc", 00:27:51.963 "name": "lvs_n_0", 00:27:51.963 "base_bdev": "9b0afd17-d9ff-4322-b3b5-5179b843f5e0", 00:27:51.963 "total_data_clusters": 5114, 00:27:51.963 "free_clusters": 5114, 00:27:51.963 "block_size": 512, 00:27:51.963 "cluster_size": 4194304 00:27:51.963 } 00:27:51.963 ]' 00:27:51.963 20:41:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="3b507046-76ff-409b-abfc-1affdf9131bc") .free_clusters' 00:27:51.963 20:41:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:27:51.963 20:41:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="3b507046-76ff-409b-abfc-1affdf9131bc") .cluster_size' 00:27:51.963 20:41:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:27:51.963 20:41:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:27:51.963 20:41:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:27:51.963 20456 00:27:51.963 20:41:07 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:51.963 20:41:07 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3b507046-76ff-409b-abfc-1affdf9131bc lbd_nest_0 20456 00:27:52.224 20:41:07 
nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=83e94421-3281-491f-b6c5-0f1c81caa79c 00:27:52.224 20:41:07 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:52.224 20:41:08 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:52.224 20:41:08 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 83e94421-3281-491f-b6c5-0f1c81caa79c 00:27:52.484 20:41:08 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.745 20:41:08 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:52.745 20:41:08 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:52.745 20:41:08 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:52.745 20:41:08 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:52.745 20:41:08 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:52.745 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.972 Initializing NVMe Controllers 00:28:04.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:04.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:04.972 Initialization complete. Launching workers. 00:28:04.972 ======================================================== 00:28:04.972 Latency(us) 00:28:04.972 Device Information : IOPS MiB/s Average min max 00:28:04.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.10 0.02 20395.84 169.23 49547.75 00:28:04.972 ======================================================== 00:28:04.972 Total : 49.10 0.02 20395.84 169.23 49547.75 00:28:04.972 00:28:04.972 20:41:18 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:04.972 20:41:18 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:04.973 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.985 Initializing NVMe Controllers 00:28:14.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:14.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:14.985 Initialization complete. Launching workers. 
00:28:14.985 ======================================================== 00:28:14.985 Latency(us) 00:28:14.985 Device Information : IOPS MiB/s Average min max 00:28:14.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 69.30 8.66 14453.44 5985.59 51879.25 00:28:14.985 ======================================================== 00:28:14.985 Total : 69.30 8.66 14453.44 5985.59 51879.25 00:28:14.985 00:28:14.985 20:41:29 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:14.985 20:41:29 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:14.985 20:41:29 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:14.985 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.990 Initializing NVMe Controllers 00:28:24.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:24.990 Initialization complete. Launching workers. 00:28:24.990 ======================================================== 00:28:24.990 Latency(us) 00:28:24.990 Device Information : IOPS MiB/s Average min max 00:28:24.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7450.98 3.64 4295.57 292.88 11045.95 00:28:24.990 ======================================================== 00:28:24.990 Total : 7450.98 3.64 4295.57 292.88 11045.95 00:28:24.990 00:28:24.990 20:41:39 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:24.990 20:41:39 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.990 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.996 Initializing NVMe Controllers 00:28:34.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:34.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:34.996 Initialization complete. Launching workers. 00:28:34.996 ======================================================== 00:28:34.996 Latency(us) 00:28:34.996 Device Information : IOPS MiB/s Average min max 00:28:34.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2328.69 291.09 13741.24 826.30 31664.03 00:28:34.996 ======================================================== 00:28:34.996 Total : 2328.69 291.09 13741.24 826.30 31664.03 00:28:34.996 00:28:34.996 20:41:49 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:34.996 20:41:49 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:34.996 20:41:49 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:34.996 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.000 Initializing NVMe Controllers 00:28:45.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.000 Controller IO queue size 128, less than required. 00:28:45.000 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:45.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:45.000 Initialization complete. Launching workers. 00:28:45.000 ======================================================== 00:28:45.000 Latency(us) 00:28:45.000 Device Information : IOPS MiB/s Average min max 00:28:45.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15886.10 7.76 8061.77 1576.09 19865.88 00:28:45.000 ======================================================== 00:28:45.000 Total : 15886.10 7.76 8061.77 1576.09 19865.88 00:28:45.000 00:28:45.000 20:42:00 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:45.001 20:42:00 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:45.001 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.089 Initializing NVMe Controllers 00:28:55.090 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:55.090 Controller IO queue size 128, less than required. 00:28:55.090 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:55.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:55.090 Initialization complete. Launching workers. 00:28:55.090 ======================================================== 00:28:55.090 Latency(us) 00:28:55.090 Device Information : IOPS MiB/s Average min max 00:28:55.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1150.20 143.77 111602.15 20512.73 250334.71 00:28:55.090 ======================================================== 00:28:55.090 Total : 1150.20 143.77 111602.15 20512.73 250334.71 00:28:55.090 00:28:55.090 20:42:10 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:55.090 20:42:10 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 83e94421-3281-491f-b6c5-0f1c81caa79c 00:28:56.473 20:42:12 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:56.734 20:42:12 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9b0afd17-d9ff-4322-b3b5-5179b843f5e0 00:28:57.084 20:42:12 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:57.084 20:42:12 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:57.085 rmmod nvme_tcp 00:28:57.085 rmmod nvme_fabrics 00:28:57.085 rmmod nvme_keyring 00:28:57.085 20:42:12 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3191978 ']' 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3191978 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3191978 ']' 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3191978 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:57.085 20:42:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3191978 00:28:57.085 20:42:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:57.085 20:42:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:57.085 20:42:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3191978' 00:28:57.085 killing process with pid 3191978 00:28:57.085 20:42:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3191978 00:28:57.085 [2024-05-13 20:42:13.017447] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:57.085 20:42:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3191978 00:28:59.642 20:42:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:59.642 20:42:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:59.642 20:42:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:59.642 20:42:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:59.642 20:42:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:59.642 20:42:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.642 20:42:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:59.642 20:42:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.559 20:42:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:01.559 00:29:01.559 real 1m32.596s 00:29:01.559 user 5m27.165s 00:29:01.559 sys 0m14.078s 00:29:01.559 20:42:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:01.559 20:42:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:01.559 ************************************ 00:29:01.559 END TEST nvmf_perf 00:29:01.559 ************************************ 00:29:01.559 20:42:17 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:01.559 20:42:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:01.559 20:42:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:01.559 20:42:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:01.559 ************************************ 00:29:01.559 START TEST nvmf_fio_host 00:29:01.559 ************************************ 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:01.559 * Looking for test storage... 00:29:01.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.559 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:01.560 20:42:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
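The gather_supported_nvmf_pci_devs pass that starts below fills the e810/x722/mlx arrays by matching PCI vendor and device IDs (Intel 0x8086 with 0x1592/0x159b/0x37d2, plus the Mellanox 0x15b3 IDs) and then reads the kernel netdev names under each matching function. A rough, self-contained sketch of that sysfs walk for the E810 case only (not the helper itself):

  intel=0x8086
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
      # E810 ports report 0x8086:0x1592 or 0x8086:0x159b, matching the "Found 0000:31:00.0 (0x8086 - 0x159b)" lines below
      if [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]]; then
          echo "Found ${pci##*/} ($vendor - $device)"
          ls "$pci/net" 2>/dev/null          # interface names, e.g. cvl_0_0 / cvl_0_1
      fi
  done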
00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:09.705 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:09.705 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:09.705 Found net devices under 0000:31:00.0: cvl_0_0 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:09.705 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:09.706 Found net devices under 0000:31:00.1: cvl_0_1 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:09.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.731 ms 00:29:09.706 00:29:09.706 --- 10.0.0.2 ping statistics --- 00:29:09.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.706 rtt min/avg/max/mdev = 0.731/0.731/0.731/0.000 ms 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:09.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:29:09.706 00:29:09.706 --- 10.0.0.1 ping statistics --- 00:29:09.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.706 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:09.706 20:42:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:09.706 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:29:09.706 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:29:09.706 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:09.706 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.706 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=3212995 00:29:09.706 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:09.706 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:09.706 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 3212995 00:29:09.706 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3212995 ']' 00:29:09.706 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.706 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:09.706 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.706 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:09.706 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.706 [2024-05-13 20:42:25.066251] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:29:09.706 [2024-05-13 20:42:25.066310] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.706 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.706 [2024-05-13 20:42:25.139009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:09.706 [2024-05-13 20:42:25.204432] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
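The target launched just above runs inside the cvl_0_0_ns_spdk namespace with the flags shown (-i 0 -e 0xFFFF -m 0xF), and waitforlisten blocks until the RPC socket answers before any provisioning starts. A condensed stand-in for that sequence; the rpc.py polling loop is an assumed substitute, since the real waitforlisten also checks the pid and enforces a retry limit:

  ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the default RPC socket until the app is ready (rough stand-in for waitforlisten)
  until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done
  "$rootdir/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192   # same transport args as host/fio.sh@27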
00:29:09.706 [2024-05-13 20:42:25.204470] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.706 [2024-05-13 20:42:25.204478] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.706 [2024-05-13 20:42:25.204484] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.706 [2024-05-13 20:42:25.204490] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.706 [2024-05-13 20:42:25.204630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.706 [2024-05-13 20:42:25.204758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.706 [2024-05-13 20:42:25.204886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:09.706 [2024-05-13 20:42:25.204889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.967 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.968 [2024-05-13 20:42:25.838771] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.968 Malloc1 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.968 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:29:10.229 [2024-05-13 20:42:25.934035] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:10.229 [2024-05-13 20:42:25.934238] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:10.229 20:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:10.229 20:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:10.229 
20:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:10.229 20:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:10.229 20:42:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:10.489 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:10.489 fio-3.35 00:29:10.489 Starting 1 thread 00:29:10.489 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.035 00:29:13.035 test: (groupid=0, jobs=1): err= 0: pid=3213380: Mon May 13 20:42:28 2024 00:29:13.035 read: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(78.7MiB/2006msec) 00:29:13.035 slat (usec): min=2, max=224, avg= 2.18, stdev= 2.11 00:29:13.035 clat (usec): min=3315, max=12675, avg=7018.45, stdev=508.46 00:29:13.035 lat (usec): min=3344, max=12678, avg=7020.64, stdev=508.27 00:29:13.035 clat percentiles (usec): 00:29:13.035 | 1.00th=[ 5866], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6652], 00:29:13.035 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7111], 00:29:13.035 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7635], 95.00th=[ 7767], 00:29:13.035 | 99.00th=[ 8160], 99.50th=[ 8291], 99.90th=[10421], 99.95th=[11207], 00:29:13.035 | 99.99th=[11994] 00:29:13.035 bw ( KiB/s): min=39120, max=40704, per=99.93%, avg=40158.00, stdev=716.57, samples=4 00:29:13.035 iops : min= 9780, max=10176, avg=10039.50, stdev=179.14, samples=4 00:29:13.035 write: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(78.8MiB/2006msec); 0 zone resets 00:29:13.035 slat (usec): min=2, max=189, avg= 2.28, stdev= 1.50 00:29:13.035 clat (usec): min=2274, max=11174, avg=5624.68, stdev=425.84 00:29:13.035 lat (usec): min=2291, max=11176, avg=5626.96, stdev=425.68 00:29:13.035 clat percentiles (usec): 00:29:13.035 | 1.00th=[ 4686], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5342], 00:29:13.035 | 30.00th=[ 5407], 40.00th=[ 5538], 50.00th=[ 5604], 60.00th=[ 5735], 00:29:13.035 | 70.00th=[ 5800], 80.00th=[ 5932], 90.00th=[ 6128], 95.00th=[ 6259], 00:29:13.035 | 99.00th=[ 6521], 99.50th=[ 6652], 99.90th=[ 8455], 99.95th=[ 9634], 00:29:13.035 | 99.99th=[11076] 00:29:13.035 bw ( KiB/s): min=39568, max=40576, per=100.00%, avg=40228.00, stdev=461.21, samples=4 00:29:13.035 iops : min= 9892, max=10144, avg=10057.00, stdev=115.30, samples=4 00:29:13.035 lat (msec) : 4=0.13%, 10=99.78%, 20=0.10% 00:29:13.035 cpu : usr=70.42%, sys=26.63%, ctx=30, majf=0, minf=6 00:29:13.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:13.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:13.035 issued rwts: total=20153,20167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:13.035 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:13.035 00:29:13.035 Run status group 0 (all jobs): 00:29:13.035 READ: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=78.7MiB (82.5MB), run=2006-2006msec 00:29:13.035 WRITE: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=78.8MiB (82.6MB), run=2006-2006msec 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:13.035 20:42:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:13.296 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:13.296 fio-3.35 00:29:13.296 Starting 1 thread 00:29:13.296 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.844 00:29:15.844 test: (groupid=0, jobs=1): err= 0: pid=3214024: Mon May 13 20:42:31 2024 00:29:15.844 read: IOPS=9317, BW=146MiB/s (153MB/s)(292MiB/2006msec) 00:29:15.844 slat (usec): min=3, max=108, avg= 3.67, stdev= 1.61 00:29:15.844 clat (usec): min=2336, max=20681, avg=8463.74, stdev=2191.77 00:29:15.844 lat (usec): min=2339, max=20684, avg=8467.41, stdev=2191.98 
00:29:15.844 clat percentiles (usec): 00:29:15.844 | 1.00th=[ 4424], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6521], 00:29:15.844 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8291], 60.00th=[ 8848], 00:29:15.844 | 70.00th=[ 9503], 80.00th=[10421], 90.00th=[11600], 95.00th=[12125], 00:29:15.844 | 99.00th=[14091], 99.50th=[14877], 99.90th=[15795], 99.95th=[16188], 00:29:15.844 | 99.99th=[16581] 00:29:15.844 bw ( KiB/s): min=65216, max=90080, per=49.24%, avg=73408.00, stdev=11270.63, samples=4 00:29:15.844 iops : min= 4076, max= 5630, avg=4588.00, stdev=704.41, samples=4 00:29:15.844 write: IOPS=5563, BW=86.9MiB/s (91.2MB/s)(150MiB/1721msec); 0 zone resets 00:29:15.844 slat (usec): min=40, max=442, avg=41.26, stdev= 8.97 00:29:15.844 clat (usec): min=1671, max=16569, avg=9391.51, stdev=1506.72 00:29:15.844 lat (usec): min=1711, max=16708, avg=9432.77, stdev=1509.13 00:29:15.844 clat percentiles (usec): 00:29:15.844 | 1.00th=[ 6521], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 8160], 00:29:15.844 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9634], 00:29:15.844 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11207], 95.00th=[12125], 00:29:15.844 | 99.00th=[13829], 99.50th=[14353], 99.90th=[16319], 99.95th=[16450], 00:29:15.844 | 99.99th=[16581] 00:29:15.844 bw ( KiB/s): min=68192, max=92576, per=85.66%, avg=76256.00, stdev=11037.20, samples=4 00:29:15.844 iops : min= 4262, max= 5786, avg=4766.00, stdev=689.83, samples=4 00:29:15.844 lat (msec) : 2=0.01%, 4=0.33%, 10=73.92%, 20=25.74%, 50=0.01% 00:29:15.844 cpu : usr=84.49%, sys=13.22%, ctx=13, majf=0, minf=23 00:29:15.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:29:15.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:15.844 issued rwts: total=18690,9575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:15.844 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:15.844 00:29:15.844 Run status group 0 (all jobs): 00:29:15.844 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=292MiB (306MB), run=2006-2006msec 00:29:15.844 WRITE: bw=86.9MiB/s (91.2MB/s), 86.9MiB/s-86.9MiB/s (91.2MB/s-91.2MB/s), io=150MiB (157MB), run=1721-1721msec 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # get_nvme_bdfs 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq 
-r '.config[].params.traddr' 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.844 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.106 Nvme0n1 00:29:16.106 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.106 20:42:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:16.106 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.106 20:42:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.366 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.366 20:42:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # ls_guid=ce076445-b384-4349-b00e-58ad08ac01e0 00:29:16.366 20:42:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # get_lvs_free_mb ce076445-b384-4349-b00e-58ad08ac01e0 00:29:16.366 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=ce076445-b384-4349-b00e-58ad08ac01e0 00:29:16.366 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:16.366 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:29:16.366 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:29:16.366 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:29:16.366 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.366 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.366 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.366 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:16.366 { 00:29:16.366 "uuid": "ce076445-b384-4349-b00e-58ad08ac01e0", 00:29:16.366 "name": "lvs_0", 00:29:16.366 "base_bdev": "Nvme0n1", 00:29:16.366 "total_data_clusters": 1787, 00:29:16.366 "free_clusters": 1787, 00:29:16.366 "block_size": 512, 00:29:16.366 "cluster_size": 1073741824 00:29:16.366 } 00:29:16.366 ]' 00:29:16.366 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="ce076445-b384-4349-b00e-58ad08ac01e0") .free_clusters' 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=1787 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="ce076445-b384-4349-b00e-58ad08ac01e0") .cluster_size' 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=1829888 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 1829888 00:29:16.628 1829888 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 1829888 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.628 20:42:32 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.628 ecfeb098-af6e-4646-bb10-545fdbe76c23 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 
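The lvol namespace this fio pass is being pointed at was sized from the lvstore report a few lines up: free_mb = free_clusters * cluster_size / 2^20 = 1787 * 1073741824 / 1048576 = 1829888. The whole provisioning chain reduces to the RPC sequence below, written as plain rpc.py calls for readability (the trace drives them through the rpc_cmd wrapper instead):

  rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0   # local NVMe exposed as Nvme0n1
  rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0           # 1 GiB clusters, 1787 free
  rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888                        # size in MiB from the arithmetic above
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420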
00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:16.628 20:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:16.889 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:16.889 fio-3.35 00:29:16.889 Starting 1 thread 00:29:16.889 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.431 [2024-05-13 20:42:35.233654] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2053b70 is same with the state(5) to be set 00:29:19.431 [2024-05-13 20:42:35.233700] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2053b70 is same with the state(5) to be set 00:29:19.431 00:29:19.431 test: (groupid=0, jobs=1): err= 0: pid=3214891: Mon May 13 20:42:35 2024 00:29:19.431 read: IOPS=7577, BW=29.6MiB/s (31.0MB/s)(59.4MiB/2007msec) 00:29:19.431 slat (usec): min=2, max=110, avg= 2.25, stdev= 1.20 00:29:19.431 clat (usec): min=3031, max=15682, avg=9363.64, stdev=727.28 00:29:19.431 lat (usec): min=3048, max=15685, avg=9365.89, stdev=727.19 00:29:19.431 clat percentiles (usec): 00:29:19.431 | 1.00th=[ 7701], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8848], 00:29:19.431 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9503], 00:29:19.431 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:29:19.431 | 99.00th=[10945], 99.50th=[11076], 99.90th=[12518], 99.95th=[14746], 00:29:19.431 | 99.99th=[14877] 00:29:19.431 bw ( KiB/s): min=29216, max=30952, per=99.94%, avg=30292.00, stdev=785.37, samples=4 00:29:19.431 iops : min= 7304, max= 7738, avg=7573.00, stdev=196.34, samples=4 00:29:19.431 write: IOPS=7570, BW=29.6MiB/s (31.0MB/s)(59.3MiB/2007msec); 0 zone resets 00:29:19.431 slat (nsec): min=2144, max=100098, avg=2345.32, stdev=856.18 00:29:19.431 clat (usec): min=1494, max=14685, avg=7447.77, stdev=647.34 00:29:19.431 lat (usec): min=1502, max=14688, avg=7450.12, stdev=647.31 00:29:19.431 clat percentiles (usec): 00:29:19.431 | 1.00th=[ 5997], 5.00th=[ 6456], 10.00th=[ 6718], 20.00th=[ 6980], 00:29:19.431 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7570], 00:29:19.431 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8225], 95.00th=[ 8455], 00:29:19.431 | 99.00th=[ 8848], 99.50th=[ 8979], 99.90th=[12125], 99.95th=[13435], 00:29:19.431 | 99.99th=[13566] 00:29:19.431 bw ( KiB/s): min=29976, max=30504, per=99.92%, avg=30256.00, stdev=240.27, samples=4 00:29:19.431 iops : min= 7494, max= 7626, avg=7564.00, stdev=60.07, samples=4 00:29:19.431 lat (msec) : 2=0.01%, 4=0.10%, 10=91.02%, 20=8.88% 00:29:19.431 cpu : usr=73.18%, sys=25.27%, ctx=74, majf=0, minf=5 00:29:19.431 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:19.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:19.431 issued rwts: total=15208,15193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:19.431 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:19.431 00:29:19.431 Run status group 0 (all jobs): 00:29:19.431 READ: bw=29.6MiB/s (31.0MB/s), 29.6MiB/s-29.6MiB/s (31.0MB/s-31.0MB/s), io=59.4MiB (62.3MB), run=2007-2007msec 00:29:19.431 WRITE: bw=29.6MiB/s (31.0MB/s), 29.6MiB/s-29.6MiB/s (31.0MB/s-31.0MB/s), io=59.3MiB (62.2MB), run=2007-2007msec 00:29:19.431 20:42:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:19.431 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.431 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.431 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.431 20:42:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:19.431 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.431 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.003 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.003 20:42:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # ls_nested_guid=eab69c92-ef9d-4919-94bf-b04add6988c0 00:29:20.003 20:42:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@63 -- # get_lvs_free_mb eab69c92-ef9d-4919-94bf-b04add6988c0 00:29:20.003 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=eab69c92-ef9d-4919-94bf-b04add6988c0 00:29:20.003 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:20.003 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:29:20.003 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:29:20.003 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:29:20.003 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.003 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.003 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.003 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:20.003 { 00:29:20.003 "uuid": "ce076445-b384-4349-b00e-58ad08ac01e0", 00:29:20.003 "name": "lvs_0", 00:29:20.003 "base_bdev": "Nvme0n1", 00:29:20.003 "total_data_clusters": 1787, 00:29:20.003 "free_clusters": 0, 00:29:20.003 "block_size": 512, 00:29:20.003 "cluster_size": 1073741824 00:29:20.003 }, 00:29:20.003 { 00:29:20.003 "uuid": "eab69c92-ef9d-4919-94bf-b04add6988c0", 00:29:20.003 "name": "lvs_n_0", 00:29:20.003 "base_bdev": "ecfeb098-af6e-4646-bb10-545fdbe76c23", 00:29:20.003 "total_data_clusters": 457025, 00:29:20.003 "free_clusters": 457025, 00:29:20.003 "block_size": 512, 00:29:20.003 "cluster_size": 4194304 00:29:20.003 } 00:29:20.003 ]' 00:29:20.003 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="eab69c92-ef9d-4919-94bf-b04add6988c0") .free_clusters' 00:29:20.264 20:42:35 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1365 -- # fc=457025 00:29:20.264 20:42:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="eab69c92-ef9d-4919-94bf-b04add6988c0") .cluster_size' 00:29:20.264 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:20.264 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=1828100 00:29:20.264 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 1828100 00:29:20.264 1828100 00:29:20.264 20:42:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:29:20.264 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.264 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.208 b272e49a-cdd0-4a89-a411-0b5565b54b06 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:21.208 
20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:21.208 20:42:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:21.468 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:21.468 fio-3.35 00:29:21.468 Starting 1 thread 00:29:21.468 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.012 00:29:24.012 test: (groupid=0, jobs=1): err= 0: pid=3216046: Mon May 13 20:42:39 2024 00:29:24.012 read: IOPS=9757, BW=38.1MiB/s (40.0MB/s)(76.5MiB/2006msec) 00:29:24.012 slat (usec): min=2, max=123, avg= 2.25, stdev= 1.20 00:29:24.012 clat (usec): min=2030, max=12087, avg=7249.25, stdev=559.22 00:29:24.012 lat (usec): min=2048, max=12089, avg=7251.49, stdev=559.16 00:29:24.012 clat percentiles (usec): 00:29:24.012 | 1.00th=[ 5997], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6849], 00:29:24.012 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:29:24.012 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7898], 95.00th=[ 8094], 00:29:24.012 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[10552], 99.95th=[11469], 00:29:24.012 | 99.99th=[11994] 00:29:24.012 bw ( KiB/s): min=37984, max=39648, per=99.93%, avg=39002.00, stdev=727.47, samples=4 00:29:24.012 iops : min= 9496, max= 9912, avg=9750.50, stdev=181.87, samples=4 00:29:24.012 write: IOPS=9767, BW=38.2MiB/s (40.0MB/s)(76.5MiB/2006msec); 0 zone resets 00:29:24.012 slat (nsec): min=2150, max=110015, avg=2350.92, stdev=825.38 00:29:24.012 clat (usec): min=1309, max=10536, avg=5781.36, stdev=481.06 00:29:24.012 lat (usec): min=1317, max=10538, avg=5783.71, stdev=481.03 00:29:24.012 clat percentiles (usec): 00:29:24.012 | 1.00th=[ 4621], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:29:24.012 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5932], 00:29:24.012 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6521], 00:29:24.012 
| 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 9241], 99.95th=[ 9896], 00:29:24.012 | 99.99th=[10552] 00:29:24.012 bw ( KiB/s): min=38552, max=39616, per=100.00%, avg=39076.00, stdev=435.47, samples=4 00:29:24.012 iops : min= 9638, max= 9904, avg=9769.00, stdev=108.87, samples=4 00:29:24.012 lat (msec) : 2=0.01%, 4=0.13%, 10=99.78%, 20=0.09% 00:29:24.012 cpu : usr=68.98%, sys=28.18%, ctx=55, majf=0, minf=5 00:29:24.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:24.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:24.012 issued rwts: total=19573,19593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:24.012 00:29:24.012 Run status group 0 (all jobs): 00:29:24.012 READ: bw=38.1MiB/s (40.0MB/s), 38.1MiB/s-38.1MiB/s (40.0MB/s-40.0MB/s), io=76.5MiB (80.2MB), run=2006-2006msec 00:29:24.012 WRITE: bw=38.2MiB/s (40.0MB/s), 38.2MiB/s-38.2MiB/s (40.0MB/s-40.0MB/s), io=76.5MiB (80.3MB), run=2006-2006msec 00:29:24.012 20:42:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:24.012 20:42:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.012 20:42:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.012 20:42:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.012 20:42:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # sync 00:29:24.012 20:42:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:24.012 20:42:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.012 20:42:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.926 20:42:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.926 20:42:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:29:25.926 20:42:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.926 20:42:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.926 20:42:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.926 20:42:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:29:25.926 20:42:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.926 20:42:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.187 20:42:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.187 20:42:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:29:26.187 20:42:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.187 20:42:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.448 20:42:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.448 20:42:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:29:26.448 20:42:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.448 20:42:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.361 20:42:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:29:28.361 20:42:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:29:28.362 20:42:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:29:28.362 20:42:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:29:28.362 20:42:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:28.362 20:42:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:28.362 20:42:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:28.362 20:42:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:28.362 20:42:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:28.362 20:42:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:28.362 rmmod nvme_tcp 00:29:28.362 rmmod nvme_fabrics 00:29:28.362 rmmod nvme_keyring 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3212995 ']' 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3212995 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3212995 ']' 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3212995 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3212995 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3212995' 00:29:28.362 killing process with pid 3212995 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3212995 00:29:28.362 [2024-05-13 20:42:44.121043] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3212995 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:28.362 20:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.905 20:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:29:30.905 00:29:30.905 real 0m29.175s 00:29:30.905 user 2m15.204s 00:29:30.905 sys 0m9.557s 00:29:30.905 20:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:30.905 20:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.905 ************************************ 00:29:30.905 END TEST nvmf_fio_host 00:29:30.905 ************************************ 00:29:30.905 20:42:46 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:30.905 20:42:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:30.905 20:42:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:30.905 20:42:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.905 ************************************ 00:29:30.905 START TEST nvmf_failover 00:29:30.905 ************************************ 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:30.905 * Looking for test storage... 00:29:30.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.905 
20:42:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:30.905 20:42:46 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:30.906 20:42:46 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:30.906 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:30.906 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.906 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:30.906 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:30.906 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:30.906 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.906 20:42:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.906 20:42:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.906 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:30.906 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:30.906 20:42:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:30.906 20:42:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:39.046 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:39.046 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:39.046 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:39.046 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:39.046 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:39.046 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:39.046 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:39.046 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:39.046 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:39.046 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:29:39.046 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:39.046 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:39.046 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:39.046 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:39.047 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:39.047 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:39.047 20:42:53 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:39.047 Found net devices under 0000:31:00.0: cvl_0_0 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:39.047 Found net devices under 0000:31:00.1: cvl_0_1 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:39.047 20:42:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:39.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:39.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:29:39.047 00:29:39.047 --- 10.0.0.2 ping statistics --- 00:29:39.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.047 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:39.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:39.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:29:39.047 00:29:39.047 --- 10.0.0.1 ping statistics --- 00:29:39.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.047 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3221724 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3221724 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3221724 ']' 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:39.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
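The nvmf_tcp_init steps traced above (nvmf/common.sh@229-268) move one of the two ice-bound ports into a private network namespace so the target and the initiator can reach each other over 10.0.0.0/24 on the same host. Below is a condensed, standalone sketch of that sequence; the interface names cvl_0_0/cvl_0_1 and the addresses are the ones used in this run, and it assumes it is run as root on a machine with those two ports present.

  #!/usr/bin/env bash
  # Sketch of the namespace setup done by nvmf_tcp_init in nvmf/common.sh.
  # cvl_0_0 becomes the target-side port, cvl_0_1 stays in the root netns.
  set -e
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root netns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
  ping -c 1 10.0.0.2                                  # root netns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target netns -> initiator

The two ping checks at the end correspond to the ping statistics printed above and confirm both directions work before the target application is started.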
00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:39.047 20:42:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:39.047 [2024-05-13 20:42:54.363303] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:29:39.047 [2024-05-13 20:42:54.363403] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:39.047 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.047 [2024-05-13 20:42:54.459759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:39.047 [2024-05-13 20:42:54.554665] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:39.047 [2024-05-13 20:42:54.554729] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:39.047 [2024-05-13 20:42:54.554738] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:39.047 [2024-05-13 20:42:54.554745] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:39.047 [2024-05-13 20:42:54.554751] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:39.047 [2024-05-13 20:42:54.554890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:39.048 [2024-05-13 20:42:54.555024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:39.048 [2024-05-13 20:42:54.555026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.309 20:42:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:39.309 20:42:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:29:39.309 20:42:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:39.309 20:42:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:39.309 20:42:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:39.309 20:42:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.309 20:42:55 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:39.571 [2024-05-13 20:42:55.296419] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.571 20:42:55 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:39.571 Malloc0 00:29:39.571 20:42:55 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:39.832 20:42:55 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:40.092 20:42:55 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:40.092 [2024-05-13 20:42:55.964537] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:40.092 [2024-05-13 20:42:55.964774] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.092 20:42:55 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:40.353 [2024-05-13 20:42:56.133157] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:40.353 20:42:56 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:40.614 [2024-05-13 20:42:56.297676] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:40.614 20:42:56 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3222147 00:29:40.614 20:42:56 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:40.614 20:42:56 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3222147 /var/tmp/bdevperf.sock 00:29:40.614 20:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3222147 ']' 00:29:40.614 20:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:40.614 20:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:40.614 20:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:40.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
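For reference, the target-side configuration that failover.sh drives through rpc_py above can be reproduced by hand with the same RPCs. The sketch below assumes the nvmf_tgt binary from this build is launched inside the cvl_0_0_ns_spdk namespace (as at nvmf/common.sh@480 above) and that rpc.py reaches it over its default /var/tmp/spdk.sock; paths are abbreviated relative to the spdk checkout, and the sleep is a crude stand-in for the waitforlisten helper used by the test.

  # 1. Launch the target inside the namespace (same flags as in the log).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  sleep 2

  # 2. Create the TCP transport and a 64 MiB malloc bdev with 512-byte blocks.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

  # 3. Expose the bdev through one subsystem with three TCP listeners.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s "$port"
  done

The three listeners on 4420/4421/4422 are what the failover test later adds and removes to force path changes while I/O is in flight.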
00:29:40.614 20:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:40.614 20:42:56 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:40.614 20:42:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:41.555 20:42:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:41.555 20:42:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:29:41.555 20:42:57 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:41.555 NVMe0n1 00:29:41.816 20:42:57 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:41.816 00:29:41.816 20:42:57 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3222421 00:29:41.816 20:42:57 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:41.816 20:42:57 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:43.200 20:42:58 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.200 [2024-05-13 20:42:58.873513] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873551] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873558] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873562] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873567] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873572] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873576] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873580] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873585] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873589] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873593] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 
[2024-05-13 20:42:58.873598] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873607] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873611] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873616] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873620] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873624] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873628] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873633] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873637] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873641] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873646] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873650] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873655] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873659] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873663] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873668] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873672] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873676] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873681] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873685] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873689] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873694] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the 
state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873698] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873703] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873707] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873711] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873715] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873720] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873724] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873730] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873734] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873738] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873742] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873747] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873751] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873755] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873760] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873764] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873768] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873772] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.200 [2024-05-13 20:42:58.873776] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.201 [2024-05-13 20:42:58.873780] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.201 [2024-05-13 20:42:58.873785] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.201 [2024-05-13 20:42:58.873789] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.201 [2024-05-13 20:42:58.873793] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.201 [2024-05-13 20:42:58.873798] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.201 [2024-05-13 20:42:58.873802] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf311b0 is same with the state(5) to be set 00:29:43.201 20:42:58 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:46.499 20:43:01 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:46.499 00:29:46.499 20:43:02 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:46.760 [2024-05-13 20:43:02.483512] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.760 [2024-05-13 20:43:02.483551] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.760 [2024-05-13 20:43:02.483557] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.760 [2024-05-13 20:43:02.483562] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.760 [2024-05-13 20:43:02.483566] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483576] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483581] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483585] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483590] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483594] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483598] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483603] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483607] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483611] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483616] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the 
state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483620] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483624] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483629] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483633] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483637] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483641] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483646] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483650] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483655] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483659] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483663] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483667] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483672] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483676] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483680] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483684] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483689] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483693] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483698] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483703] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483707] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483712] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483717] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 [2024-05-13 20:43:02.483721] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32060 is same with the state(5) to be set 00:29:46.761 20:43:02 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:50.057 20:43:05 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.058 [2024-05-13 20:43:05.656828] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.058 20:43:05 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:51.000 20:43:06 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:51.000 [2024-05-13 20:43:06.830811] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.000 [2024-05-13 20:43:06.830848] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.000 [2024-05-13 20:43:06.830854] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.000 [2024-05-13 20:43:06.830858] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.000 [2024-05-13 20:43:06.830863] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.000 [2024-05-13 20:43:06.830868] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.000 [2024-05-13 20:43:06.830872] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.000 [2024-05-13 20:43:06.830877] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.000 [2024-05-13 20:43:06.830881] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.000 [2024-05-13 20:43:06.830886] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.000 [2024-05-13 20:43:06.830890] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.000 [2024-05-13 20:43:06.830894] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.000 [2024-05-13 20:43:06.830899] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.000 [2024-05-13 20:43:06.830903] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.000 [2024-05-13 20:43:06.830908] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) 
to be set
00:29:51.000 [2024-05-13 20:43:06.831234] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.001 [2024-05-13 20:43:06.831239] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.001 [2024-05-13 20:43:06.831243] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.001 [2024-05-13 20:43:06.831248] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.001 [2024-05-13 20:43:06.831253] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.001 [2024-05-13 20:43:06.831257] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.001 [2024-05-13 20:43:06.831261] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.001 [2024-05-13 20:43:06.831267] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.001 [2024-05-13 20:43:06.831271] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.001 [2024-05-13 20:43:06.831276] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.001 [2024-05-13 20:43:06.831280] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.001 [2024-05-13 20:43:06.831285] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf32d50 is same with the state(5) to be set 00:29:51.001 20:43:06 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3222421 00:29:57.598 0 00:29:57.598 20:43:12 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3222147 00:29:57.598 20:43:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3222147 ']' 00:29:57.598 20:43:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3222147 00:29:57.598 20:43:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:29:57.598 20:43:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:57.598 20:43:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3222147 00:29:57.598 20:43:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:57.598 20:43:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:57.598 20:43:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3222147' 00:29:57.598 killing process with pid 3222147 00:29:57.598 20:43:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3222147 00:29:57.598 20:43:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3222147 00:29:57.598 20:43:13 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:57.598 [2024-05-13 20:42:56.372765] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:29:57.598 [2024-05-13 20:42:56.372824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3222147 ] 00:29:57.598 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.598 [2024-05-13 20:42:56.438985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.598 [2024-05-13 20:42:56.503460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.598 Running I/O for 15 seconds... 00:29:57.598 [2024-05-13 20:42:58.874288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94800 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 
20:42:58.874632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.598 [2024-05-13 20:42:58.874811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.598 [2024-05-13 20:42:58.874820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.874827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.874836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.874843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.874852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.874859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.874868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.874875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.874884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.874891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.874900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.874907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.874918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.874925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.874934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.874941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.874950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.874957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.874966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.874973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.874982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.874989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.874998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875284] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875449] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.599 [2024-05-13 20:42:58.875481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.599 [2024-05-13 20:42:58.875488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94504 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:57.600 [2024-05-13 20:42:58.875783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.600 [2024-05-13 20:42:58.875846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.600 [2024-05-13 20:42:58.875862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.600 [2024-05-13 20:42:58.875878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.600 [2024-05-13 20:42:58.875894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.600 [2024-05-13 20:42:58.875910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.600 [2024-05-13 20:42:58.875926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.600 [2024-05-13 20:42:58.875943] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.600 [2024-05-13 20:42:58.875972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95360 len:8 PRP1 0x0 PRP2 0x0 00:29:57.600 [2024-05-13 20:42:58.875979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.875989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.600 [2024-05-13 20:42:58.875994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.600 [2024-05-13 20:42:58.876000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95368 len:8 PRP1 0x0 PRP2 0x0 00:29:57.600 [2024-05-13 20:42:58.876007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.876014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.600 [2024-05-13 20:42:58.876020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.600 [2024-05-13 20:42:58.876026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95376 len:8 PRP1 0x0 PRP2 0x0 00:29:57.600 [2024-05-13 20:42:58.876032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.876040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.600 [2024-05-13 20:42:58.876045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.600 [2024-05-13 20:42:58.876051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95384 len:8 PRP1 0x0 PRP2 0x0 00:29:57.600 [2024-05-13 20:42:58.876058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.876065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.600 [2024-05-13 20:42:58.876070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.600 [2024-05-13 20:42:58.876076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95392 len:8 PRP1 0x0 PRP2 0x0 00:29:57.600 [2024-05-13 20:42:58.876083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.876090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.600 [2024-05-13 20:42:58.876096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.600 [2024-05-13 20:42:58.876102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95400 len:8 PRP1 0x0 PRP2 0x0 00:29:57.600 [2024-05-13 20:42:58.876109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.876116] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.600 [2024-05-13 20:42:58.876121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.600 [2024-05-13 20:42:58.876127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95408 len:8 PRP1 0x0 PRP2 0x0 00:29:57.600 [2024-05-13 20:42:58.876134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.600 [2024-05-13 20:42:58.876141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.600 [2024-05-13 20:42:58.876147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.600 [2024-05-13 20:42:58.876154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95416 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.876161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.876168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.876174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.876179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95424 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.876186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.876194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.876199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.876205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95432 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.876212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.876219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.876225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.876231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95440 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.876238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.876246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.876251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.876257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95448 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.876264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.876272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.876277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.876283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95456 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.876289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.876297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.876302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.876308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95464 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.876326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.876334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.876339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.876346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94624 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.876353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.876360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.876367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.876373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94632 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.876380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.876388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.876393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.876399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94640 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.876406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.876414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.876419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.876425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94648 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.876432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.876439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 
20:42:58.876444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.876450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94656 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.876457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.876464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.876469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.876475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94664 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.876482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.887540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.887567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.887578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94672 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.887587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.887594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.887600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.887606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94680 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.887614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.887621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.887626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.887632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94688 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.887639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.887651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.887657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.887663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94696 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.887670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.887678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.887683] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.887689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94704 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.887696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.887703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.887709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.887715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94712 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.887722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.887729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.887734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.887740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94720 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.887747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.887755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.601 [2024-05-13 20:42:58.887760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.601 [2024-05-13 20:42:58.887766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94728 len:8 PRP1 0x0 PRP2 0x0 00:29:57.601 [2024-05-13 20:42:58.887773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.887810] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2530cb0 was disconnected and freed. reset controller. 
00:29:57.601 [2024-05-13 20:42:58.887824] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:57.601 [2024-05-13 20:42:58.887851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.601 [2024-05-13 20:42:58.887864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.887877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.601 [2024-05-13 20:42:58.887888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.887896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.601 [2024-05-13 20:42:58.887903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.887910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.601 [2024-05-13 20:42:58.887919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.601 [2024-05-13 20:42:58.887928] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.601 [2024-05-13 20:42:58.887954] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2511910 (9): Bad file descriptor 00:29:57.601 [2024-05-13 20:42:58.891550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.601 [2024-05-13 20:42:59.014120] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:57.602 [2024-05-13 20:43:02.483932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.483968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.483985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.483993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:57504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484133] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.602 [2024-05-13 20:43:02.484458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.602 [2024-05-13 20:43:02.484467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:66 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:57720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57776 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:57.603 [2024-05-13 20:43:02.484804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484965] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.484991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.484998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.485007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.485014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.485023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.485030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.485040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.485048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.485057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.485064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.485073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.485080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.485090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.485097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.485106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.485113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.485122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.603 [2024-05-13 20:43:02.485129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.603 [2024-05-13 20:43:02.485138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:58248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 
[2024-05-13 20:43:02.485627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.604 [2024-05-13 20:43:02.485683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.604 [2024-05-13 20:43:02.485804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.604 [2024-05-13 20:43:02.485812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.485821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:02.485828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.485836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:02.485844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.485853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:02.485860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.485869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:02.485876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.485885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:02.485892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.485901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:02.485908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.485917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:02.485923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.485933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:02.485939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.485948] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:02.485955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.485964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:02.485971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.485980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:02.485987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.485997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:02.486004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.486013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:02.486020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.486030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:02.486037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.486045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2532ed0 is same with the state(5) to be set 00:29:57.605 [2024-05-13 20:43:02.486053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.605 [2024-05-13 20:43:02.486059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.605 [2024-05-13 20:43:02.486066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58464 len:8 PRP1 0x0 PRP2 0x0 00:29:57.605 [2024-05-13 20:43:02.486073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.486108] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2532ed0 was disconnected and freed. reset controller. 
00:29:57.605 [2024-05-13 20:43:02.486117] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:57.605 [2024-05-13 20:43:02.486137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.605 [2024-05-13 20:43:02.486146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.486154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.605 [2024-05-13 20:43:02.486161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.486169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.605 [2024-05-13 20:43:02.486177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.486185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.605 [2024-05-13 20:43:02.486192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:02.486200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.605 [2024-05-13 20:43:02.489843] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.605 [2024-05-13 20:43:02.489871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2511910 (9): Bad file descriptor 00:29:57.605 [2024-05-13 20:43:02.525416] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:57.605 [2024-05-13 20:43:06.833494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833708] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.605 [2024-05-13 20:43:06.833833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.605 [2024-05-13 20:43:06.833842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.833849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.833858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.833865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.833874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.833881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.833890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.833898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.833907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.833913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.833922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.833929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.833938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.833945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.833954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.833961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.833971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.833978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.833988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.833994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.834011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.834027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834036] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.834043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.834059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.834075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.834091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.834107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.834123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.834139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.834155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.606 [2024-05-13 20:43:06.834174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.606 [2024-05-13 20:43:06.834191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63528 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.606 [2024-05-13 20:43:06.834206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.606 [2024-05-13 20:43:06.834222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.606 [2024-05-13 20:43:06.834238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.606 [2024-05-13 20:43:06.834254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.606 [2024-05-13 20:43:06.834269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.606 [2024-05-13 20:43:06.834285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.606 [2024-05-13 20:43:06.834300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.606 [2024-05-13 20:43:06.834320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.606 [2024-05-13 20:43:06.834336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.606 [2024-05-13 20:43:06.834352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:57.606 [2024-05-13 20:43:06.834368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.606 [2024-05-13 20:43:06.834385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.606 [2024-05-13 20:43:06.834394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834528] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.607 [2024-05-13 20:43:06.834672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.607 [2024-05-13 20:43:06.834688] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.607 [2024-05-13 20:43:06.834704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.607 [2024-05-13 20:43:06.834720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.834950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.834959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 
[2024-05-13 20:43:06.835237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.607 [2024-05-13 20:43:06.835268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.607 [2024-05-13 20:43:06.835275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.608 [2024-05-13 20:43:06.835291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.608 [2024-05-13 20:43:06.835307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.608 [2024-05-13 20:43:06.835326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.608 [2024-05-13 20:43:06.835343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.608 [2024-05-13 20:43:06.835358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.608 [2024-05-13 20:43:06.835374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.608 [2024-05-13 20:43:06.835390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.608 [2024-05-13 20:43:06.835406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.608 [2024-05-13 20:43:06.835424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.608 [2024-05-13 20:43:06.835440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.608 [2024-05-13 20:43:06.835455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64016 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64024 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64032 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64040 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:57.608 [2024-05-13 20:43:06.835583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64048 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64056 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64064 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64072 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64080 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64088 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835736] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64096 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64104 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64112 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64120 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64128 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64136 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.835903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64144 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.835910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.608 [2024-05-13 20:43:06.835918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.608 [2024-05-13 20:43:06.835926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.608 [2024-05-13 20:43:06.845792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64152 len:8 PRP1 0x0 PRP2 0x0 00:29:57.608 [2024-05-13 20:43:06.845820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.609 [2024-05-13 20:43:06.845834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.609 [2024-05-13 20:43:06.845841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.609 [2024-05-13 20:43:06.845847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64160 len:8 PRP1 0x0 PRP2 0x0 00:29:57.609 [2024-05-13 20:43:06.845854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.609 [2024-05-13 20:43:06.845862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.609 [2024-05-13 20:43:06.845868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.609 [2024-05-13 20:43:06.845874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64168 len:8 PRP1 0x0 PRP2 0x0 00:29:57.609 [2024-05-13 20:43:06.845881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.609 [2024-05-13 20:43:06.845888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.609 [2024-05-13 20:43:06.845893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.609 [2024-05-13 20:43:06.845899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64176 len:8 PRP1 0x0 PRP2 0x0 00:29:57.609 [2024-05-13 20:43:06.845906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.609 [2024-05-13 20:43:06.845913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.609 [2024-05-13 20:43:06.845919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.609 [2024-05-13 20:43:06.845929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64184 len:8 PRP1 0x0 PRP2 0x0 00:29:57.609 [2024-05-13 20:43:06.845936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.609 [2024-05-13 20:43:06.845944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.609 [2024-05-13 
20:43:06.845949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.609 [2024-05-13 20:43:06.845955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64192 len:8 PRP1 0x0 PRP2 0x0 00:29:57.609 [2024-05-13 20:43:06.845962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.609 [2024-05-13 20:43:06.846001] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25403d0 was disconnected and freed. reset controller. 00:29:57.609 [2024-05-13 20:43:06.846010] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:57.609 [2024-05-13 20:43:06.846037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.609 [2024-05-13 20:43:06.846046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.609 [2024-05-13 20:43:06.846055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.609 [2024-05-13 20:43:06.846062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.609 [2024-05-13 20:43:06.846070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.609 [2024-05-13 20:43:06.846077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.609 [2024-05-13 20:43:06.846085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.609 [2024-05-13 20:43:06.846092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.609 [2024-05-13 20:43:06.846099] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.609 [2024-05-13 20:43:06.846136] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2511910 (9): Bad file descriptor 00:29:57.609 [2024-05-13 20:43:06.849723] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.609 [2024-05-13 20:43:06.883281] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
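The long run of ABORTED - SQ DELETION notices above is the expected signature of a path being torn down while I/O is in flight: every READ/WRITE still queued on the deleted submission queue is completed manually with an abort status, the qpair is disconnected and freed, and bdev_nvme fails over to the remaining listener (here from 10.0.0.2:4422 back to 10.0.0.2:4420) before resetting the controller. As a hedged illustration only (this snippet is not part of failover.sh), the same events can be tallied from the saved bdevperf output with grep; the try.txt path and the message strings are taken verbatim from this log, everything else is illustrative:

  # Illustrative sketch: count failover-related events in the captured run log.
  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  grep -c 'ABORTED - SQ DELETION' "$log"            # queued I/O aborted while a path was removed
  grep -c 'Start failover from' "$log"              # failover transitions between listeners
  grep -c 'Resetting controller successful' "$log"  # completed resets; the script later checks for exactly 3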
00:29:57.609 00:29:57.609 Latency(us) 00:29:57.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.609 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:57.609 Verification LBA range: start 0x0 length 0x4000 00:29:57.609 NVMe0n1 : 15.01 10391.58 40.59 441.70 0.00 11787.15 761.17 20534.61 00:29:57.609 =================================================================================================================== 00:29:57.609 Total : 10391.58 40.59 441.70 0.00 11787.15 761.17 20534.61 00:29:57.609 Received shutdown signal, test time was about 15.000000 seconds 00:29:57.609 00:29:57.609 Latency(us) 00:29:57.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.609 =================================================================================================================== 00:29:57.609 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:57.609 20:43:13 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:29:57.609 20:43:13 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:29:57.609 20:43:13 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:29:57.609 20:43:13 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3225428 00:29:57.609 20:43:13 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3225428 /var/tmp/bdevperf.sock 00:29:57.609 20:43:13 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:29:57.609 20:43:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3225428 ']' 00:29:57.609 20:43:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:57.609 20:43:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:57.609 20:43:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:57.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:57.609 20:43:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:57.609 20:43:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:58.181 20:43:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:58.181 20:43:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:29:58.181 20:43:13 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:58.181 [2024-05-13 20:43:14.034432] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:58.181 20:43:14 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:58.441 [2024-05-13 20:43:14.206828] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:58.441 20:43:14 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:58.701 NVMe0n1 00:29:58.701 20:43:14 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:58.962 00:29:58.962 20:43:14 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:59.222 00:29:59.222 20:43:15 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:59.222 20:43:15 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:29:59.483 20:43:15 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:59.743 20:43:15 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:03.169 20:43:18 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:03.169 20:43:18 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:03.169 20:43:18 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:03.169 20:43:18 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3226452 00:30:03.169 20:43:18 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3226452 00:30:04.109 0 00:30:04.109 20:43:19 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:04.109 [2024-05-13 20:43:13.116096] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:30:04.109 [2024-05-13 20:43:13.116155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3225428 ] 00:30:04.109 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.109 [2024-05-13 20:43:13.181685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.109 [2024-05-13 20:43:13.245153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.109 [2024-05-13 20:43:15.404340] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:04.109 [2024-05-13 20:43:15.404382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.109 [2024-05-13 20:43:15.404393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.109 [2024-05-13 20:43:15.404403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.109 [2024-05-13 20:43:15.404410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.109 [2024-05-13 20:43:15.404418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.109 [2024-05-13 20:43:15.404425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.109 [2024-05-13 20:43:15.404433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.109 [2024-05-13 20:43:15.404440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.109 [2024-05-13 20:43:15.404447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:04.109 [2024-05-13 20:43:15.404469] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:04.109 [2024-05-13 20:43:15.404482] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213c910 (9): Bad file descriptor 00:30:04.109 [2024-05-13 20:43:15.537529] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:04.109 Running I/O for 1 seconds... 
00:30:04.109 00:30:04.109 Latency(us) 00:30:04.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.109 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:04.109 Verification LBA range: start 0x0 length 0x4000 00:30:04.109 NVMe0n1 : 1.01 9175.46 35.84 0.00 0.00 13886.38 3003.73 11468.80 00:30:04.109 =================================================================================================================== 00:30:04.109 Total : 9175.46 35.84 0.00 0.00 13886.38 3003.73 11468.80 00:30:04.109 20:43:19 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:04.109 20:43:19 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:04.109 20:43:19 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:04.369 20:43:20 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:04.369 20:43:20 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:04.369 20:43:20 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:04.629 20:43:20 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:07.937 20:43:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:07.937 20:43:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:07.937 20:43:23 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3225428 00:30:07.937 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3225428 ']' 00:30:07.937 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3225428 00:30:07.937 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:07.937 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:07.937 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3225428 00:30:07.937 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:07.937 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:07.938 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3225428' 00:30:07.938 killing process with pid 3225428 00:30:07.938 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3225428 00:30:07.938 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3225428 00:30:07.938 20:43:23 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:07.938 20:43:23 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:08.201 
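The failover exercised in the trace above is driven entirely over JSON-RPC: the target advertises extra listeners on ports 4421 and 4422, bdevperf attaches the same NVMe0 controller once per portal so alternate paths exist, and the paths are then detached one at a time while the verify workload keeps running. Below is a condensed, hedged sketch of that sequence; the individual commands are copied from this trace, while the SPDK and NQN variables are introduced here purely for readability and are not part of the original script:

  # Illustrative consolidation of the RPC calls seen in this run (not the literal failover.sh).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NQN=nqn.2016-06.io.spdk:cnode1
  # Target side: add two more TCP listeners for the same subsystem.
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
  # bdevperf side: attach the controller once per portal so bdev_nvme holds alternate paths.
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
  # Remove paths one at a time; the running workload must fail over to whatever path is left.
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN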
20:43:23 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:08.201 rmmod nvme_tcp 00:30:08.201 rmmod nvme_fabrics 00:30:08.201 rmmod nvme_keyring 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3221724 ']' 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3221724 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3221724 ']' 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3221724 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:08.201 20:43:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3221724 00:30:08.201 20:43:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:08.201 20:43:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:08.201 20:43:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3221724' 00:30:08.201 killing process with pid 3221724 00:30:08.201 20:43:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3221724 00:30:08.201 [2024-05-13 20:43:24.047961] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:08.201 20:43:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3221724 00:30:08.462 20:43:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:08.462 20:43:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:08.462 20:43:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:08.462 20:43:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:08.462 20:43:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:08.462 20:43:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.462 20:43:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:08.462 20:43:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.374 20:43:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:10.374 00:30:10.374 real 0m39.815s 00:30:10.374 user 
2m1.365s 00:30:10.374 sys 0m8.442s 00:30:10.374 20:43:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:10.375 20:43:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:10.375 ************************************ 00:30:10.375 END TEST nvmf_failover 00:30:10.375 ************************************ 00:30:10.375 20:43:26 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:10.375 20:43:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:10.375 20:43:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:10.375 20:43:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:10.636 ************************************ 00:30:10.636 START TEST nvmf_host_discovery 00:30:10.636 ************************************ 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:10.636 * Looking for test storage... 00:30:10.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:10.636 20:43:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
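
The entries above and below populate the harness's table of supported NVMe-oF NIC PCI device IDs (the e810, x722, and mlx arrays) and then map each matching device to its kernel net interface through /sys/bus/pci/devices/<PCI address>/net, which is how the cvl_0_0 and cvl_0_1 interfaces are located further down. A minimal stand-alone sketch of that lookup, assuming only that lspci is installed on the node and using a short illustrative ID subset rather than the harness's full table:

  #!/usr/bin/env bash
  # Illustrative sketch, not part of nvmf/common.sh: list NICs matching a few of
  # the PCI IDs recorded above and map each to its kernel net device via sysfs.
  ids=("8086:159b" "8086:1592" "15b3:1017")   # two E810 IDs plus one ID from the mlx list
  for id in "${ids[@]}"; do
      # lspci -D prints the full domain:bus:device.function address as the first field
      while read -r addr _; do
          netdir="/sys/bus/pci/devices/${addr}/net"
          # the net/ directory holds the interface name(s) bound to this PCI device
          [[ -d $netdir ]] && echo "$addr ($id) -> $(ls "$netdir")"
      done < <(lspci -D -d "$id")
  done

The harness performs the equivalent lookup with its pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob in the entries that follow.
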
00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:18.781 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:18.781 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:18.781 Found net devices under 0000:31:00.0: cvl_0_0 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:18.781 Found net devices under 0000:31:00.1: cvl_0_1 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.781 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.782 20:43:34 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:18.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:30:18.782 00:30:18.782 --- 10.0.0.2 ping statistics --- 00:30:18.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.782 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:30:18.782 00:30:18.782 --- 10.0.0.1 ping statistics --- 00:30:18.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.782 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3232129 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3232129 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3232129 ']' 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.782 20:43:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:18.782 [2024-05-13 20:43:34.497947] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:30:18.782 [2024-05-13 20:43:34.498000] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.782 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.782 [2024-05-13 20:43:34.590057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.782 [2024-05-13 20:43:34.682914] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:18.782 [2024-05-13 20:43:34.682969] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.782 [2024-05-13 20:43:34.682977] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.782 [2024-05-13 20:43:34.682984] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.782 [2024-05-13 20:43:34.682989] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:18.782 [2024-05-13 20:43:34.683014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.355 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:19.355 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:30:19.355 20:43:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:19.355 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:19.355 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:19.355 20:43:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:19.355 20:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:19.355 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.355 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:19.355 [2024-05-13 20:43:35.289937] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.355 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.355 20:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:19.355 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.355 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:19.355 [2024-05-13 20:43:35.297909] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:19.355 [2024-05-13 20:43:35.298132] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:19.615 null0 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:19.615 null1 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3232247 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3232247 /tmp/host.sock 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3232247 ']' 00:30:19.615 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:30:19.616 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:19.616 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:19.616 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:19.616 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:19.616 20:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:19.616 20:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:19.616 [2024-05-13 20:43:35.384709] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:30:19.616 [2024-05-13 20:43:35.384764] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3232247 ] 00:30:19.616 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.616 [2024-05-13 20:43:35.454107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.616 [2024-05-13 20:43:35.529011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.184 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:20.184 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:30:20.184 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:20.184 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:20.184 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.184 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.184 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.184 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:20.184 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.184 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.443 20:43:36 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.443 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:20.443 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:20.443 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:20.443 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.444 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.703 [2024-05-13 20:43:36.473144] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:20.703 
20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:20.703 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.704 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.963 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:30:20.963 20:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:30:21.533 [2024-05-13 20:43:37.190505] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:21.533 [2024-05-13 20:43:37.190527] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:21.533 [2024-05-13 20:43:37.190540] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:21.533 [2024-05-13 20:43:37.319972] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:21.793 [2024-05-13 20:43:37.502821] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:21.793 [2024-05-13 20:43:37.502845] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:21.793 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.053 20:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.313 [2024-05-13 20:43:38.009108] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:22.313 [2024-05-13 20:43:38.009645] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:22.313 [2024-05-13 20:43:38.009669] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:22.313 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:22.314 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:22.314 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:22.314 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:22.314 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:22.314 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:22.314 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:22.314 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:22.314 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:22.314 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.314 20:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:22.314 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.314 [2024-05-13 20:43:38.138072] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:22.314 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.314 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:22.314 20:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:30:22.314 [2024-05-13 20:43:38.201805] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:22.314 [2024-05-13 20:43:38.201827] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:22.314 [2024-05-13 20:43:38.201833] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:23.253 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:23.253 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:23.253 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:23.253 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:23.253 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:23.253 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.253 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:23.253 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.253 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.515 [2024-05-13 20:43:39.289098] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:23.515 [2024-05-13 20:43:39.289120] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:23.515 [2024-05-13 20:43:39.294342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.515 [2024-05-13 20:43:39.294360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.515 [2024-05-13 20:43:39.294370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.515 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:23.515 [2024-05-13 20:43:39.294378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.515 [2024-05-13 20:43:39.294386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.515 [2024-05-13 20:43:39.294393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.515 [2024-05-13 20:43:39.294406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:23.515 [2024-05-13 20:43:39.294413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.516 [2024-05-13 20:43:39.294420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b88d80 is same with the state(5) to be set 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:23.516 20:43:39 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.516 [2024-05-13 20:43:39.304357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b88d80 (9): Bad file descriptor 00:30:23.516 [2024-05-13 20:43:39.314396] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:23.516 [2024-05-13 20:43:39.314677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.516 [2024-05-13 20:43:39.315063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.516 [2024-05-13 20:43:39.315073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b88d80 with addr=10.0.0.2, port=4420 00:30:23.516 [2024-05-13 20:43:39.315081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b88d80 is same with the state(5) to be set 00:30:23.516 [2024-05-13 20:43:39.315093] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b88d80 (9): Bad file descriptor 00:30:23.516 [2024-05-13 20:43:39.315110] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:23.516 [2024-05-13 20:43:39.315117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:23.516 [2024-05-13 20:43:39.315124] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:23.516 [2024-05-13 20:43:39.315137] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
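The @910-@916 frames repeated throughout this trace are the waitforcondition polling helper from test/common/autotest_common.sh. A minimal sketch of that loop, reconstructed from the xtrace above (the shipped helper may differ in details such as its final failure handling), is:

    #!/usr/bin/env bash
    # Minimal reconstruction of the waitforcondition helper seen in the trace:
    # evaluate a caller-supplied bash condition up to 10 times, one second apart.
    waitforcondition() {
        local cond=$1      # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # condition holds -> success
            sleep 1
        done
        return 1                       # retry budget exhausted -> let the test fail
    }

    # Example use, matching host/discovery.sh@120 in the log:
    # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'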
00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.516 [2024-05-13 20:43:39.324452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:23.516 [2024-05-13 20:43:39.324799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.516 [2024-05-13 20:43:39.325175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.516 [2024-05-13 20:43:39.325184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b88d80 with addr=10.0.0.2, port=4420 00:30:23.516 [2024-05-13 20:43:39.325191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b88d80 is same with the state(5) to be set 00:30:23.516 [2024-05-13 20:43:39.325202] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b88d80 (9): Bad file descriptor 00:30:23.516 [2024-05-13 20:43:39.325223] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:23.516 [2024-05-13 20:43:39.325230] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:23.516 [2024-05-13 20:43:39.325237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:23.516 [2024-05-13 20:43:39.325251] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.516 [2024-05-13 20:43:39.334504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:23.516 [2024-05-13 20:43:39.334758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.516 [2024-05-13 20:43:39.335131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.516 [2024-05-13 20:43:39.335140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b88d80 with addr=10.0.0.2, port=4420 00:30:23.516 [2024-05-13 20:43:39.335147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b88d80 is same with the state(5) to be set 00:30:23.516 [2024-05-13 20:43:39.335159] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b88d80 (9): Bad file descriptor 00:30:23.516 [2024-05-13 20:43:39.335169] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:23.516 [2024-05-13 20:43:39.335175] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:23.516 [2024-05-13 20:43:39.335182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:23.516 [2024-05-13 20:43:39.335192] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.516 [2024-05-13 20:43:39.344560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:23.516 [2024-05-13 20:43:39.344847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.516 [2024-05-13 20:43:39.345086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.516 [2024-05-13 20:43:39.345096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b88d80 with addr=10.0.0.2, port=4420 00:30:23.516 [2024-05-13 20:43:39.345102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b88d80 is same with the state(5) to be set 00:30:23.516 [2024-05-13 20:43:39.345113] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b88d80 (9): Bad file descriptor 00:30:23.516 [2024-05-13 20:43:39.345123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:23.516 [2024-05-13 20:43:39.345129] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:23.516 [2024-05-13 20:43:39.345136] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:23.516 [2024-05-13 20:43:39.345146] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:23.516 [2024-05-13 20:43:39.354613] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:23.516 [2024-05-13 20:43:39.354974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.516 [2024-05-13 20:43:39.355324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.516 [2024-05-13 20:43:39.355336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b88d80 with addr=10.0.0.2, port=4420 00:30:23.516 [2024-05-13 20:43:39.355344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b88d80 is same with the state(5) to be set 00:30:23.516 [2024-05-13 20:43:39.355355] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x1b88d80 (9): Bad file descriptor 00:30:23.516 [2024-05-13 20:43:39.355376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:23.516 [2024-05-13 20:43:39.355383] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:23.516 [2024-05-13 20:43:39.355390] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:23.516 [2024-05-13 20:43:39.355401] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:23.516 [2024-05-13 20:43:39.364665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:23.516 [2024-05-13 20:43:39.365014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.516 [2024-05-13 20:43:39.365236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.516 [2024-05-13 20:43:39.365245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b88d80 with addr=10.0.0.2, port=4420 00:30:23.516 [2024-05-13 20:43:39.365252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b88d80 is same with the state(5) to be set 00:30:23.516 [2024-05-13 20:43:39.365263] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b88d80 (9): Bad file descriptor 00:30:23.516 [2024-05-13 20:43:39.365280] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:23.516 [2024-05-13 20:43:39.365287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:23.516 [2024-05-13 20:43:39.365294] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:23.516 [2024-05-13 20:43:39.365305] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
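The bursts of "connect() failed, errno = 111" (ECONNREFUSED) above are the expected side effect of host/discovery.sh@127 removing the 4420 listener while the initiator still holds a path to it; the host keeps retrying 10.0.0.2:4420 until the next discovery log page prunes the stale path. rpc_cmd in the trace is the harness shorthand for scripts/rpc.py calls, so done by hand the same two steps would look roughly like this (addresses and socket path as used in this run):

    # Target side: drop the first listener; discovery later reports the
    # 4420 path as "not found" and only 4421 remains.
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # Host side: list the remaining paths for controller nvme0 (should print 4421).
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'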
00:30:23.516 [2024-05-13 20:43:39.374718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:23.516 [2024-05-13 20:43:39.375063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.516 [2024-05-13 20:43:39.375560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.516 [2024-05-13 20:43:39.375598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b88d80 with addr=10.0.0.2, port=4420 00:30:23.516 [2024-05-13 20:43:39.375608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b88d80 is same with the state(5) to be set 00:30:23.516 [2024-05-13 20:43:39.375627] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b88d80 (9): Bad file descriptor 00:30:23.516 [2024-05-13 20:43:39.375660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:23.516 [2024-05-13 20:43:39.375669] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:23.516 [2024-05-13 20:43:39.375676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:23.516 [2024-05-13 20:43:39.375691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.516 [2024-05-13 20:43:39.377653] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:23.516 [2024-05-13 20:43:39.377676] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:23.516 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:23.517 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:23.517 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:23.517 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:23.517 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:23.517 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:23.517 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:23.517 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:23.517 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.517 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:23.517 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.517 20:43:39 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:30:23.517 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:23.777 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.778 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.778 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.778 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:23.778 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:23.778 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:23.778 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:23.778 20:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:23.778 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.778 20:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.158 [2024-05-13 20:43:40.748508] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:25.158 [2024-05-13 20:43:40.748530] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:25.158 [2024-05-13 20:43:40.748545] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:25.158 [2024-05-13 20:43:40.836835] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:25.158 [2024-05-13 20:43:40.941964] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:25.158 [2024-05-13 20:43:40.941997] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:25.158 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.158 20:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:25.158 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:25.158 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:25.158 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:25.158 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:25.158 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.159 request: 00:30:25.159 { 00:30:25.159 "name": "nvme", 00:30:25.159 "trtype": "tcp", 00:30:25.159 "traddr": "10.0.0.2", 00:30:25.159 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:25.159 "adrfam": "ipv4", 00:30:25.159 "trsvcid": "8009", 00:30:25.159 "wait_for_attach": true, 00:30:25.159 "method": "bdev_nvme_start_discovery", 00:30:25.159 "req_id": 1 00:30:25.159 } 00:30:25.159 Got JSON-RPC error response 00:30:25.159 response: 00:30:25.159 { 00:30:25.159 "code": -17, 00:30:25.159 "message": "File exists" 00:30:25.159 } 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:25.159 20:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.159 request: 00:30:25.159 { 00:30:25.159 "name": "nvme_second", 00:30:25.159 "trtype": "tcp", 00:30:25.159 "traddr": "10.0.0.2", 00:30:25.159 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:25.159 "adrfam": "ipv4", 00:30:25.159 "trsvcid": "8009", 00:30:25.159 "wait_for_attach": true, 00:30:25.159 "method": "bdev_nvme_start_discovery", 00:30:25.159 "req_id": 1 00:30:25.159 } 00:30:25.159 Got JSON-RPC error response 00:30:25.159 response: 00:30:25.159 { 00:30:25.159 "code": -17, 00:30:25.159 "message": "File exists" 00:30:25.159 } 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:25.159 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:25.418 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:25.418 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:25.418 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.418 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:25.419 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.419 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:25.419 
20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.419 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:25.419 20:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:25.419 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:25.419 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:25.419 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:25.419 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:25.419 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:25.419 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:25.419 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:25.419 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.419 20:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.359 [2024-05-13 20:43:42.165624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.359 [2024-05-13 20:43:42.165999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.359 [2024-05-13 20:43:42.166013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b84e30 with addr=10.0.0.2, port=8010 00:30:26.359 [2024-05-13 20:43:42.166028] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:26.359 [2024-05-13 20:43:42.166036] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:26.359 [2024-05-13 20:43:42.166043] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:27.299 [2024-05-13 20:43:43.167872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.300 [2024-05-13 20:43:43.168168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.300 [2024-05-13 20:43:43.168178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b84e30 with addr=10.0.0.2, port=8010 00:30:27.300 [2024-05-13 20:43:43.168190] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:27.300 [2024-05-13 20:43:43.168196] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:27.300 [2024-05-13 20:43:43.168203] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:28.240 [2024-05-13 20:43:44.169811] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:28.240 request: 00:30:28.240 { 00:30:28.240 "name": "nvme_second", 00:30:28.240 "trtype": "tcp", 00:30:28.240 "traddr": "10.0.0.2", 00:30:28.240 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:28.240 
"adrfam": "ipv4", 00:30:28.240 "trsvcid": "8010", 00:30:28.240 "attach_timeout_ms": 3000, 00:30:28.240 "method": "bdev_nvme_start_discovery", 00:30:28.240 "req_id": 1 00:30:28.240 } 00:30:28.240 Got JSON-RPC error response 00:30:28.240 response: 00:30:28.240 { 00:30:28.240 "code": -110, 00:30:28.240 "message": "Connection timed out" 00:30:28.240 } 00:30:28.240 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:28.240 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:28.240 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:28.240 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:28.240 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:28.240 20:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:28.240 20:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:28.240 20:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:28.240 20:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:28.240 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.240 20:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:28.240 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3232247 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:28.501 rmmod nvme_tcp 00:30:28.501 rmmod nvme_fabrics 00:30:28.501 rmmod nvme_keyring 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3232129 ']' 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3232129 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 3232129 ']' 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 3232129 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3232129 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3232129' 00:30:28.501 killing process with pid 3232129 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 3232129 00:30:28.501 [2024-05-13 20:43:44.352508] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:28.501 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 3232129 00:30:28.762 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:28.762 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:28.762 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:28.762 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:28.762 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:28.762 20:43:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.762 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:28.762 20:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.675 20:43:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:30.675 00:30:30.675 real 0m20.196s 00:30:30.675 user 0m22.791s 00:30:30.675 sys 0m7.223s 00:30:30.675 20:43:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:30.675 20:43:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.675 ************************************ 00:30:30.675 END TEST nvmf_host_discovery 00:30:30.675 ************************************ 00:30:30.675 20:43:46 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:30.675 20:43:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:30.675 20:43:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:30.675 20:43:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:30.675 ************************************ 00:30:30.675 START TEST nvmf_host_multipath_status 00:30:30.675 ************************************ 00:30:30.675 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:30.939 * Looking for test storage... 
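nvmf_host_discovery ends here (real 0m20.196s) and run_test launches the multipath_status suite. Reproducing that step outside Jenkins would be roughly the following (workspace path taken from the log; a populated autorun-spdk.conf and root privileges are assumed):

    # assumption: same workspace layout as the CI node
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./test/nvmf/host/multipath_status.sh --transport=tcp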
00:30:30.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.939 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:30.940 20:43:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:30.940 20:43:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:39.154 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:39.155 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:39.155 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
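(Annotation, not part of the captured output.) The block above is nvmf/common.sh enumerating supported NICs: it builds device-ID lists for Intel E810 (0x1592, 0x159b), X722 (0x37d2) and several Mellanox parts, keeps the E810 list (the [[ e810 == e810 ]] branch above), and resolves each matching PCI address to its kernel net device through sysfs. A rough manual equivalent, assuming the two E810 ports found in this run (0000:31:00.0 and 0000:31:00.1):

    # illustrative sketch of the sysfs lookup the script performs
    for pci in 0000:31:00.0 0000:31:00.1; do
        ls /sys/bus/pci/devices/"$pci"/net/   # resolves to cvl_0_0 / cvl_0_1 in this run
    done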
00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:39.155 Found net devices under 0000:31:00.0: cvl_0_0 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:39.155 Found net devices under 0000:31:00.1: cvl_0_1 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:39.155 20:43:54 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:39.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:39.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:30:39.155 00:30:39.155 --- 10.0.0.2 ping statistics --- 00:30:39.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.155 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:39.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:39.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:30:39.155 00:30:39.155 --- 10.0.0.1 ping statistics --- 00:30:39.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.155 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:39.155 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:39.156 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:39.156 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3238759 00:30:39.156 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3238759 00:30:39.156 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:39.156 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3238759 ']' 00:30:39.156 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.156 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:39.156 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.156 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:39.156 20:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:39.156 [2024-05-13 20:43:55.010625] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:30:39.156 [2024-05-13 20:43:55.010691] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.156 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.156 [2024-05-13 20:43:55.089072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:39.416 [2024-05-13 20:43:55.163284] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.416 [2024-05-13 20:43:55.163330] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.416 [2024-05-13 20:43:55.163338] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.416 [2024-05-13 20:43:55.163345] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.416 [2024-05-13 20:43:55.163350] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.416 [2024-05-13 20:43:55.163423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.416 [2024-05-13 20:43:55.163439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.987 20:43:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:39.987 20:43:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:30:39.987 20:43:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:39.987 20:43:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:39.987 20:43:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:39.987 20:43:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.987 20:43:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3238759 00:30:39.987 20:43:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:40.248 [2024-05-13 20:43:55.959618] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.248 20:43:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:40.248 Malloc0 00:30:40.248 20:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:40.508 20:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:40.508 20:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.769 [2024-05-13 20:43:56.583879] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:30:40.769 [2024-05-13 20:43:56.584123] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.769 20:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:41.030 [2024-05-13 20:43:56.724405] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:41.030 20:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:41.030 20:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3239122 00:30:41.030 20:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:41.030 20:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3239122 /var/tmp/bdevperf.sock 00:30:41.030 20:43:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3239122 ']' 00:30:41.030 20:43:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:41.030 20:43:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:41.030 20:43:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:41.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
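(Annotation, not part of the captured output.) At this point the target side is up: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace, a TCP transport and a Malloc0 namespace are attached to nqn.2016-06.io.spdk:cnode1, and the subsystem listens on 10.0.0.2 ports 4420 and 4421. The bdevperf process just launched acts as the initiator; the rest of the trace drives it over /var/tmp/bdevperf.sock, attaching both listeners as one multipath controller and then toggling per-listener ANA states while checking the current/connected/accessible flags of each I/O path. A condensed sketch of the RPC sequence that follows (paths, addresses and NQN exactly as they appear in the trace; condensed for readability, not a verbatim excerpt):

    # illustrative condensation of the RPCs traced below
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $rpc -s $sock bdev_nvme_set_options -r -1
    $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # flip the ANA state of one listener on the target, then inspect the paths bdevperf sees
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    $rpc -s $sock bdev_nvme_get_io_paths | \
        jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").accessible'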
00:30:41.030 20:43:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:41.030 20:43:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:41.973 20:43:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:41.973 20:43:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:30:41.973 20:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:41.973 20:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:42.234 Nvme0n1 00:30:42.234 20:43:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:42.495 Nvme0n1 00:30:42.495 20:43:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:42.495 20:43:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:45.040 20:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:45.040 20:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:45.040 20:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:45.040 20:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:45.980 20:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:45.980 20:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:45.980 20:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.980 20:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:45.980 20:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:45.980 20:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:45.980 20:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:45.981 20:44:01 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:46.243 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:46.243 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:46.243 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.243 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:46.505 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.505 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:46.505 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:46.505 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.505 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.505 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:46.505 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.505 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:46.766 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.766 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:46.766 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.766 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:46.766 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.766 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:46.766 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:47.026 20:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:47.286 20:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:48.228 20:44:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:48.228 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:48.228 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.228 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:48.488 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:48.488 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:48.488 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.488 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:48.488 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.488 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:48.488 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.488 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:48.749 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.749 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:48.749 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.749 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:48.749 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.749 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:48.749 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:48.749 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:49.008 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.008 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:49.008 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.008 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:49.268 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.268 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:49.268 20:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:49.268 20:44:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:49.529 20:44:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:50.468 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:50.468 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:50.468 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.468 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:50.729 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.729 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:50.729 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.729 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:50.729 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:50.729 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:50.729 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.729 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:50.990 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.990 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:50.990 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.990 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:51.251 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:51.251 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:51.251 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:51.251 20:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:51.251 20:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:51.251 20:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:51.251 20:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:51.251 20:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:51.512 20:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:51.512 20:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:51.512 20:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:51.772 20:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:51.772 20:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:53.156 20:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:53.156 20:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:53.156 20:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:53.156 20:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.156 20:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.156 20:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:53.156 20:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.156 20:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:53.156 20:44:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:53.156 20:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:53.156 20:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.156 20:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:53.416 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.416 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:53.416 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.416 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:53.416 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.416 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:53.416 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.416 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:53.676 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.676 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:53.676 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.676 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:53.935 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:53.935 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:53.935 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:53.935 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:54.194 20:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:55.133 20:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:55.133 20:44:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:55.133 20:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.133 20:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:55.394 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:55.394 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:55.394 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.394 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:55.394 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:55.394 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:55.394 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.394 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:55.654 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:55.654 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:55.654 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.654 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:55.914 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:55.914 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:55.914 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.914 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:55.914 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:55.914 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:55.914 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.914 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:56.174 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:56.174 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:56.174 20:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:56.174 20:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:56.435 20:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:57.377 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:57.377 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:57.377 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.377 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:57.638 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:57.638 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:57.638 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.638 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:57.898 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.898 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:57.898 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.898 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:57.898 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.898 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:57.898 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.898 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:58.159 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:58.159 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:58.159 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.159 20:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:58.159 20:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:58.159 20:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:58.159 20:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.159 20:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:58.420 20:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:58.420 20:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:58.679 20:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:30:58.679 20:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:58.680 20:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:58.940 20:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:59.881 20:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:59.881 20:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:59.881 20:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.881 20:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:00.141 20:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.141 20:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:00.141 20:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.142 20:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:31:00.142 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.142 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:00.142 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.142 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:00.402 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.402 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:00.402 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.402 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:00.663 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.663 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:00.663 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.663 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:00.663 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.663 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:00.663 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.663 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:00.924 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.924 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:00.924 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:00.924 20:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:01.187 20:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:02.127 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:31:02.127 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:02.127 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.127 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:02.388 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:02.388 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:02.388 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.388 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:02.649 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.649 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:02.649 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.649 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:02.649 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.649 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:02.649 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.649 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:02.909 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.909 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:02.909 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:02.909 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.909 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.909 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:02.909 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.909 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:03.170 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:03.170 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:03.170 20:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:03.431 20:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:03.431 20:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:04.472 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:04.472 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:04.472 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.472 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:04.732 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.732 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:04.732 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.732 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:04.732 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.733 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:04.733 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.733 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:04.994 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.994 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:04.994 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.994 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:05.254 20:44:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.254 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:05.254 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:05.254 20:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.254 20:44:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.254 20:44:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:05.254 20:44:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.254 20:44:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:05.515 20:44:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.515 20:44:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:05.515 20:44:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:05.515 20:44:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:05.776 20:44:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:06.720 20:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:06.720 20:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:06.720 20:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.720 20:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:06.997 20:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.997 20:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:06.997 20:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.997 20:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:06.997 20:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:06.997 20:44:22 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:07.301 20:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.301 20:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:07.301 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.301 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:07.301 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.301 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:07.574 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.574 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:07.574 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.574 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:07.574 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.574 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:07.574 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.574 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:07.838 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:07.838 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3239122 00:31:07.838 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3239122 ']' 00:31:07.838 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3239122 00:31:07.838 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:07.838 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:07.838 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3239122 00:31:07.838 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:07.838 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:07.838 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
3239122' 00:31:07.838 killing process with pid 3239122 00:31:07.838 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3239122 00:31:07.838 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3239122 00:31:07.838 Connection closed with partial response: 00:31:07.838 00:31:07.838 00:31:07.838 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3239122 00:31:07.838 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:07.838 [2024-05-13 20:43:56.768758] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:31:07.838 [2024-05-13 20:43:56.768824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3239122 ] 00:31:07.838 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.838 [2024-05-13 20:43:56.831321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.838 [2024-05-13 20:43:56.882964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:07.838 Running I/O for 90 seconds... 00:31:07.838 [2024-05-13 20:44:09.783898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.838 [2024-05-13 20:44:09.783935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:07.838 [2024-05-13 20:44:09.783968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.838 [2024-05-13 20:44:09.783975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:07.838 [2024-05-13 20:44:09.783985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.838 [2024-05-13 20:44:09.783991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:07.838 [2024-05-13 20:44:09.784001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.838 [2024-05-13 20:44:09.784007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:07.838 [2024-05-13 20:44:09.784017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.838 [2024-05-13 20:44:09.784022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:07.838 [2024-05-13 20:44:09.784032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.838 [2024-05-13 20:44:09.784037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:07.838 [2024-05-13 20:44:09.784047] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.838 [2024-05-13 20:44:09.784052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:07.838 [2024-05-13 20:44:09.784063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.838 [2024-05-13 20:44:09.784068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:07.838 [2024-05-13 20:44:09.784121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.838 [2024-05-13 20:44:09.784129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:07.838 [2024-05-13 20:44:09.784141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.838 [2024-05-13 20:44:09.784146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:07.838 [2024-05-13 20:44:09.784157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.838 [2024-05-13 20:44:09.784167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:07.838 [2024-05-13 20:44:09.784178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.838 [2024-05-13 20:44:09.784183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:07.838 [2024-05-13 20:44:09.784194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.838 [2024-05-13 20:44:09.784199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:07.838 [2024-05-13 20:44:09.784210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.838 [2024-05-13 20:44:09.784215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:07.838 [2024-05-13 20:44:09.784226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.839 [2024-05-13 20:44:09.784231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.784242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.839 [2024-05-13 20:44:09.784248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 
dnr:0 00:31:07.839 [2024-05-13 20:44:09.785780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.839 [2024-05-13 20:44:09.785787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.785799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.839 [2024-05-13 20:44:09.785804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.785816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.839 [2024-05-13 20:44:09.785821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.785832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.839 [2024-05-13 20:44:09.785837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.785848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.839 [2024-05-13 20:44:09.785852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.785864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.839 [2024-05-13 20:44:09.785869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.785880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:52600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.785885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.785898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.785903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.785915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.785920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.785931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.785936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.785948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.785952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.785963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:52640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.785969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.785979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.785984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.785995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.839 [2024-05-13 20:44:09.786163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:52720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:52736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:07.839 [2024-05-13 20:44:09.786297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:52832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.839 [2024-05-13 20:44:09.786420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:07.839 [2024-05-13 20:44:09.786431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 
nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:31:07.840 [2024-05-13 20:44:09.786801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.786982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.786996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.787000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.787015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.787020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.787036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.787041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.787055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.787059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.787074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.787079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.787093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.787098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.787112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.787117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.787131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.787136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.787150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.787155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.787169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.787174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:07.840 [2024-05-13 20:44:09.787189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.840 [2024-05-13 20:44:09.787194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.841 [2024-05-13 20:44:09.787389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.841 [2024-05-13 20:44:09.787409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.841 [2024-05-13 20:44:09.787431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.841 [2024-05-13 20:44:09.787451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.841 [2024-05-13 20:44:09.787470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.841 [2024-05-13 20:44:09.787490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.841 [2024-05-13 20:44:09.787510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.841 [2024-05-13 20:44:09.787529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:07.841 [2024-05-13 20:44:09.787588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.787984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.787989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.788004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.788009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.788025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.788029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.788045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.841 [2024-05-13 20:44:09.788050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.788066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.841 [2024-05-13 20:44:09.788071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:09.788087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.841 [2024-05-13 20:44:09.788091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:21.595419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.841 [2024-05-13 20:44:21.595456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:21.595489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.841 [2024-05-13 20:44:21.595495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:07.841 [2024-05-13 20:44:21.595506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.841 [2024-05-13 20:44:21.595511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.595522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.842 [2024-05-13 20:44:21.595527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.595812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.842 [2024-05-13 20:44:21.595820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.595836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.595841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:31:07.842 [2024-05-13 20:44:21.595852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.595857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.595868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.595873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.595884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.595889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.595900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.595905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.595916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.595921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.595931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.595937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.595947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.595952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.595962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.842 [2024-05-13 20:44:21.595967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.595977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.842 [2024-05-13 20:44:21.595981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.595992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.595997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.596012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.596031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.596046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.596061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.596076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.596092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.842 [2024-05-13 20:44:21.596413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.842 [2024-05-13 20:44:21.596430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.842 [2024-05-13 20:44:21.596445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.842 [2024-05-13 20:44:21.596460] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.842 [2024-05-13 20:44:21.596475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.842 [2024-05-13 20:44:21.596491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.842 [2024-05-13 20:44:21.596506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.842 [2024-05-13 20:44:21.596524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.596534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.596539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.597345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.597357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.597369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.597375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.597385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.842 [2024-05-13 20:44:21.597390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:07.842 [2024-05-13 20:44:21.597401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.842 [2024-05-13 20:44:21.597405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:07.842 Received shutdown signal, test time was about 25.098848 seconds 00:31:07.842 00:31:07.843 Latency(us) 00:31:07.843 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:31:07.843 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:07.843 Verification LBA range: start 0x0 length 0x4000 00:31:07.843 Nvme0n1 : 25.10 11147.63 43.55 0.00 0.00 11464.08 399.36 3019898.88 00:31:07.843 =================================================================================================================== 00:31:07.843 Total : 11147.63 43.55 0.00 0.00 11464.08 399.36 3019898.88 00:31:07.843 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:08.102 rmmod nvme_tcp 00:31:08.102 rmmod nvme_fabrics 00:31:08.102 rmmod nvme_keyring 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3238759 ']' 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3238759 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3238759 ']' 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3238759 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:08.102 20:44:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3238759 00:31:08.102 20:44:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:08.102 20:44:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:08.102 20:44:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3238759' 00:31:08.102 killing process with pid 3238759 00:31:08.102 20:44:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3238759 00:31:08.102 [2024-05-13 20:44:24.037232] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 20:44:24 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3238759 00:31:08.102 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:08.363 20:44:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:08.363 20:44:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:08.363 20:44:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:08.363 20:44:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:08.363 20:44:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:08.363 20:44:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.363 20:44:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:08.363 20:44:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.908 20:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:10.908 00:31:10.908 real 0m39.642s 00:31:10.908 user 1m39.270s 00:31:10.908 sys 0m11.177s 00:31:10.908 20:44:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:10.908 20:44:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:10.908 ************************************ 00:31:10.908 END TEST nvmf_host_multipath_status 00:31:10.908 ************************************ 00:31:10.908 20:44:26 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:10.908 20:44:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:10.908 20:44:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:10.908 20:44:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:10.908 ************************************ 00:31:10.908 START TEST nvmf_discovery_remove_ifc 00:31:10.908 ************************************ 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:10.908 * Looking for test storage... 
00:31:10.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:10.908 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:10.909 20:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:19.050 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:19.051 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:19.051 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:19.051 20:44:33 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:19.051 Found net devices under 0000:31:00.0: cvl_0_0 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:19.051 Found net devices under 0000:31:00.1: cvl_0_1 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:19.051 20:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:19.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:19.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:31:19.051 00:31:19.051 --- 10.0.0.2 ping statistics --- 00:31:19.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.051 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:19.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:19.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:31:19.051 00:31:19.051 --- 10.0.0.1 ping statistics --- 00:31:19.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.051 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3249161 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3249161 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3249161 ']' 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:19.051 [2024-05-13 20:44:34.185978] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:31:19.051 [2024-05-13 20:44:34.186043] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:19.051 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.051 [2024-05-13 20:44:34.281602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.051 [2024-05-13 20:44:34.374471] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:19.051 [2024-05-13 20:44:34.374534] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:19.051 [2024-05-13 20:44:34.374548] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:19.051 [2024-05-13 20:44:34.374555] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:19.051 [2024-05-13 20:44:34.374560] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:19.051 [2024-05-13 20:44:34.374593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:19.051 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:19.052 20:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:19.313 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:19.313 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:19.313 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.313 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:19.313 [2024-05-13 20:44:35.025498] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.313 [2024-05-13 20:44:35.033477] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:19.313 [2024-05-13 20:44:35.033760] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:19.313 null0 00:31:19.313 [2024-05-13 20:44:35.065691] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:19.313 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.313 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3249287 00:31:19.313 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3249287 /tmp/host.sock 00:31:19.313 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3249287 ']' 00:31:19.313 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:19.313 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 
00:31:19.313 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:19.313 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:19.313 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:19.313 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:19.313 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:19.313 [2024-05-13 20:44:35.138390] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:31:19.313 [2024-05-13 20:44:35.138451] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3249287 ] 00:31:19.313 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.313 [2024-05-13 20:44:35.209237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.573 [2024-05-13 20:44:35.283599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.145 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:20.145 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:20.145 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:20.145 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:20.145 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.145 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:20.145 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.145 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:20.145 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.145 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:20.145 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.145 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:20.145 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.145 20:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:21.529 [2024-05-13 20:44:37.033497] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:21.529 [2024-05-13 20:44:37.033517] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:21.529 [2024-05-13 
20:44:37.033530] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:21.529 [2024-05-13 20:44:37.122803] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:21.529 [2024-05-13 20:44:37.184276] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:21.529 [2024-05-13 20:44:37.184329] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:21.529 [2024-05-13 20:44:37.184353] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:21.529 [2024-05-13 20:44:37.184368] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:21.529 [2024-05-13 20:44:37.184387] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:21.529 [2024-05-13 20:44:37.192905] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x14fb620 was disconnected and freed. delete nvme_qpair. 
00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:21.529 20:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:22.913 20:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:22.913 20:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:22.913 20:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:22.913 20:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.913 20:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:22.913 20:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:22.913 20:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:22.913 20:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.913 20:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:22.913 20:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:23.854 20:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:23.854 20:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.855 20:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:23.855 20:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.855 20:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:23.855 20:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:31:23.855 20:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:23.855 20:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.855 20:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:23.855 20:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:24.796 20:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:24.796 20:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.796 20:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:24.796 20:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.796 20:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:24.796 20:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.796 20:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:24.796 20:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.796 20:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:24.796 20:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:25.738 20:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:25.738 20:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.738 20:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:25.738 20:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.738 20:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:25.738 20:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:25.738 20:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:25.738 20:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.738 20:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:25.738 20:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:27.123 [2024-05-13 20:44:42.624677] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:27.123 [2024-05-13 20:44:42.624722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.123 [2024-05-13 20:44:42.624734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.123 [2024-05-13 20:44:42.624745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.123 [2024-05-13 20:44:42.624753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:27.123 [2024-05-13 20:44:42.624761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.123 [2024-05-13 20:44:42.624768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.123 [2024-05-13 20:44:42.624776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.123 [2024-05-13 20:44:42.624784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.123 [2024-05-13 20:44:42.624792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.123 [2024-05-13 20:44:42.624800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.123 [2024-05-13 20:44:42.624807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c2900 is same with the state(5) to be set 00:31:27.123 [2024-05-13 20:44:42.634697] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c2900 (9): Bad file descriptor 00:31:27.123 20:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:27.123 20:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.123 20:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:27.123 20:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.123 20:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:27.123 20:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:27.123 20:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:27.123 [2024-05-13 20:44:42.644738] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.063 [2024-05-13 20:44:43.698341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:29.005 [2024-05-13 20:44:44.722358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:29.005 [2024-05-13 20:44:44.722397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c2900 with addr=10.0.0.2, port=4420 00:31:29.005 [2024-05-13 20:44:44.722410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c2900 is same with the state(5) to be set 00:31:29.005 [2024-05-13 20:44:44.722771] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c2900 (9): Bad file descriptor 00:31:29.005 [2024-05-13 20:44:44.722793] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
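
[Editor's note] The trace above keeps re-running the same one-second polling round while the aborted admin commands and the failed reconnect play out. Below is a minimal sketch of that polling pattern as it appears in the xtrace — a reconstruction for readability, not the test's actual source. It assumes rpc_cmd wraps SPDK's scripts/rpc.py and that /tmp/host.sock is the host application's RPC socket, as the traced commands suggest.

    get_bdev_list() {
        # List the NVMe bdevs currently attached to the host app as one sorted line,
        # exactly as the repeated trace lines do.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list matches the expected value:
        # '' while waiting for removal, a name such as nvme1n1 while waiting for
        # the controller to come back.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
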
00:31:29.005 [2024-05-13 20:44:44.722813] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:29.005 [2024-05-13 20:44:44.722835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:29.005 [2024-05-13 20:44:44.722845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.005 [2024-05-13 20:44:44.722855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:29.005 [2024-05-13 20:44:44.722862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.005 [2024-05-13 20:44:44.722870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:29.005 [2024-05-13 20:44:44.722877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.005 [2024-05-13 20:44:44.722885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:29.005 [2024-05-13 20:44:44.722892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.005 [2024-05-13 20:44:44.722900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:29.005 [2024-05-13 20:44:44.722907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.005 [2024-05-13 20:44:44.722914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:31:29.005 [2024-05-13 20:44:44.723432] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1d90 (9): Bad file descriptor 00:31:29.005 [2024-05-13 20:44:44.724444] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:29.005 [2024-05-13 20:44:44.724455] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:29.005 20:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.005 20:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:29.005 20:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:29.946 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:29.946 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.946 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:29.946 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.946 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:29.946 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:29.946 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:29.946 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.946 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:29.946 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:29.946 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.207 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:30.207 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:30.207 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.207 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:30.207 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.207 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:30.207 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:30.207 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:30.207 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.207 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:30.207 20:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:31.148 [2024-05-13 20:44:46.780501] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:31.148 [2024-05-13 20:44:46.780522] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:31.148 [2024-05-13 20:44:46.780536] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:31.148 [2024-05-13 20:44:46.868816] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:31.148 20:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:31.148 20:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.148 20:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:31.148 20:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.148 20:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:31.148 20:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:31.148 20:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:31.148 20:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.148 20:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:31.148 20:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:31.148 [2024-05-13 20:44:47.091260] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:31.148 [2024-05-13 20:44:47.091301] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:31.148 [2024-05-13 20:44:47.091328] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:31.148 [2024-05-13 20:44:47.091343] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:31.148 [2024-05-13 20:44:47.091351] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:31.409 [2024-05-13 20:44:47.097456] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x14d1380 was disconnected and freed. delete nvme_qpair. 
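
[Editor's note] The recovery in this stretch of the trace is driven by re-plumbing the interface inside the target's network namespace and letting the discovery poller re-attach the subsystem as nvme1. The two commands that bring the path back are copied from the xtrace just above; the namespace and interface names are simply the ones this run happens to use.

    # Re-add the target address and bring the interface back up inside the
    # target's network namespace; the discovery service then re-creates the
    # subsystem, which shows up as a new bdev (nvme1n1) in the polling loop.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
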
00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3249287 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3249287 ']' 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3249287 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3249287 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3249287' 00:31:32.352 killing process with pid 3249287 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3249287 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3249287 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:32.352 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:32.352 rmmod nvme_tcp 00:31:32.352 rmmod nvme_fabrics 00:31:32.613 rmmod nvme_keyring 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3249161 ']' 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3249161 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3249161 ']' 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3249161 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3249161 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3249161' 00:31:32.613 killing process with pid 3249161 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3249161 00:31:32.613 [2024-05-13 20:44:48.389397] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3249161 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:32.613 20:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.159 20:44:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:35.159 00:31:35.159 real 0m24.238s 00:31:35.159 user 0m27.994s 00:31:35.159 sys 0m6.966s 00:31:35.159 20:44:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:35.159 20:44:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:35.159 ************************************ 00:31:35.159 END TEST nvmf_discovery_remove_ifc 00:31:35.159 ************************************ 00:31:35.159 20:44:50 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:35.159 20:44:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:35.159 20:44:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:35.159 20:44:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:31:35.159 ************************************ 00:31:35.159 START TEST nvmf_identify_kernel_target 00:31:35.159 ************************************ 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:35.159 * Looking for test storage... 00:31:35.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.159 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:35.160 20:44:50 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:35.160 20:44:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:43.304 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:43.304 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:43.304 Found net devices under 0000:31:00.0: cvl_0_0 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:43.304 Found net devices under 0000:31:00.1: cvl_0_1 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:43.304 20:44:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:43.304 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:43.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:43.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:31:43.304 00:31:43.304 --- 10.0.0.2 ping statistics --- 00:31:43.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.304 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:31:43.304 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:43.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:43.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:31:43.305 00:31:43.305 --- 10.0.0.1 ping statistics --- 00:31:43.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.305 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@728 -- # local ip 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # ip_candidates=() 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # local -A ip_candidates 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:43.305 20:44:59 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:43.305 20:44:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:47.516 Waiting for block devices as requested 00:31:47.516 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:47.516 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:47.516 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:47.516 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:47.516 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:47.516 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:47.516 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:47.516 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:47.516 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:47.516 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:47.776 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:47.776 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:47.776 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:47.776 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:48.036 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:48.036 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:48.036 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:48.298 No valid GPT data, bailing 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:48.298 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:48.565 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:48.565 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:48.565 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:48.565 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:48.565 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:48.565 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:48.565 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:31:48.565 00:31:48.565 Discovery Log Number of Records 2, Generation counter 2 00:31:48.565 =====Discovery Log Entry 0====== 00:31:48.565 trtype: tcp 00:31:48.565 adrfam: ipv4 00:31:48.565 subtype: current discovery subsystem 00:31:48.565 treq: not specified, sq flow control disable supported 00:31:48.565 portid: 1 00:31:48.565 trsvcid: 4420 00:31:48.565 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:48.565 traddr: 10.0.0.1 00:31:48.565 eflags: none 00:31:48.565 sectype: none 00:31:48.565 =====Discovery Log Entry 1====== 00:31:48.565 trtype: tcp 00:31:48.565 adrfam: ipv4 00:31:48.565 subtype: nvme subsystem 00:31:48.565 treq: not specified, sq flow control disable supported 00:31:48.565 portid: 1 00:31:48.565 trsvcid: 4420 00:31:48.565 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:48.565 traddr: 10.0.0.1 00:31:48.565 eflags: none 00:31:48.565 sectype: none 00:31:48.565 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:48.565 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:48.565 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.565 ===================================================== 00:31:48.565 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:48.565 ===================================================== 00:31:48.565 Controller Capabilities/Features 00:31:48.565 ================================ 00:31:48.565 Vendor ID: 0000 00:31:48.566 Subsystem Vendor ID: 0000 00:31:48.566 Serial Number: 06168d98c854c1a3b556 00:31:48.566 Model Number: Linux 00:31:48.566 Firmware Version: 6.7.0-68 00:31:48.566 Recommended Arb Burst: 0 00:31:48.566 IEEE OUI Identifier: 00 00 00 00:31:48.566 Multi-path I/O 00:31:48.566 May have multiple subsystem ports: No 00:31:48.566 May have multiple 
controllers: No 00:31:48.566 Associated with SR-IOV VF: No 00:31:48.566 Max Data Transfer Size: Unlimited 00:31:48.566 Max Number of Namespaces: 0 00:31:48.566 Max Number of I/O Queues: 1024 00:31:48.566 NVMe Specification Version (VS): 1.3 00:31:48.566 NVMe Specification Version (Identify): 1.3 00:31:48.566 Maximum Queue Entries: 1024 00:31:48.566 Contiguous Queues Required: No 00:31:48.566 Arbitration Mechanisms Supported 00:31:48.566 Weighted Round Robin: Not Supported 00:31:48.566 Vendor Specific: Not Supported 00:31:48.566 Reset Timeout: 7500 ms 00:31:48.566 Doorbell Stride: 4 bytes 00:31:48.566 NVM Subsystem Reset: Not Supported 00:31:48.566 Command Sets Supported 00:31:48.566 NVM Command Set: Supported 00:31:48.566 Boot Partition: Not Supported 00:31:48.566 Memory Page Size Minimum: 4096 bytes 00:31:48.566 Memory Page Size Maximum: 4096 bytes 00:31:48.566 Persistent Memory Region: Not Supported 00:31:48.566 Optional Asynchronous Events Supported 00:31:48.566 Namespace Attribute Notices: Not Supported 00:31:48.566 Firmware Activation Notices: Not Supported 00:31:48.566 ANA Change Notices: Not Supported 00:31:48.566 PLE Aggregate Log Change Notices: Not Supported 00:31:48.566 LBA Status Info Alert Notices: Not Supported 00:31:48.566 EGE Aggregate Log Change Notices: Not Supported 00:31:48.566 Normal NVM Subsystem Shutdown event: Not Supported 00:31:48.566 Zone Descriptor Change Notices: Not Supported 00:31:48.566 Discovery Log Change Notices: Supported 00:31:48.566 Controller Attributes 00:31:48.566 128-bit Host Identifier: Not Supported 00:31:48.566 Non-Operational Permissive Mode: Not Supported 00:31:48.566 NVM Sets: Not Supported 00:31:48.566 Read Recovery Levels: Not Supported 00:31:48.566 Endurance Groups: Not Supported 00:31:48.566 Predictable Latency Mode: Not Supported 00:31:48.566 Traffic Based Keep ALive: Not Supported 00:31:48.566 Namespace Granularity: Not Supported 00:31:48.566 SQ Associations: Not Supported 00:31:48.566 UUID List: Not Supported 00:31:48.566 Multi-Domain Subsystem: Not Supported 00:31:48.566 Fixed Capacity Management: Not Supported 00:31:48.566 Variable Capacity Management: Not Supported 00:31:48.566 Delete Endurance Group: Not Supported 00:31:48.566 Delete NVM Set: Not Supported 00:31:48.566 Extended LBA Formats Supported: Not Supported 00:31:48.566 Flexible Data Placement Supported: Not Supported 00:31:48.566 00:31:48.566 Controller Memory Buffer Support 00:31:48.566 ================================ 00:31:48.566 Supported: No 00:31:48.566 00:31:48.566 Persistent Memory Region Support 00:31:48.566 ================================ 00:31:48.566 Supported: No 00:31:48.566 00:31:48.566 Admin Command Set Attributes 00:31:48.566 ============================ 00:31:48.566 Security Send/Receive: Not Supported 00:31:48.566 Format NVM: Not Supported 00:31:48.566 Firmware Activate/Download: Not Supported 00:31:48.566 Namespace Management: Not Supported 00:31:48.566 Device Self-Test: Not Supported 00:31:48.566 Directives: Not Supported 00:31:48.566 NVMe-MI: Not Supported 00:31:48.566 Virtualization Management: Not Supported 00:31:48.566 Doorbell Buffer Config: Not Supported 00:31:48.566 Get LBA Status Capability: Not Supported 00:31:48.566 Command & Feature Lockdown Capability: Not Supported 00:31:48.566 Abort Command Limit: 1 00:31:48.566 Async Event Request Limit: 1 00:31:48.566 Number of Firmware Slots: N/A 00:31:48.566 Firmware Slot 1 Read-Only: N/A 00:31:48.566 Firmware Activation Without Reset: N/A 00:31:48.566 Multiple Update Detection Support: N/A 
00:31:48.566 Firmware Update Granularity: No Information Provided 00:31:48.566 Per-Namespace SMART Log: No 00:31:48.566 Asymmetric Namespace Access Log Page: Not Supported 00:31:48.566 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:48.566 Command Effects Log Page: Not Supported 00:31:48.566 Get Log Page Extended Data: Supported 00:31:48.566 Telemetry Log Pages: Not Supported 00:31:48.566 Persistent Event Log Pages: Not Supported 00:31:48.566 Supported Log Pages Log Page: May Support 00:31:48.566 Commands Supported & Effects Log Page: Not Supported 00:31:48.566 Feature Identifiers & Effects Log Page:May Support 00:31:48.566 NVMe-MI Commands & Effects Log Page: May Support 00:31:48.566 Data Area 4 for Telemetry Log: Not Supported 00:31:48.566 Error Log Page Entries Supported: 1 00:31:48.566 Keep Alive: Not Supported 00:31:48.566 00:31:48.566 NVM Command Set Attributes 00:31:48.566 ========================== 00:31:48.566 Submission Queue Entry Size 00:31:48.566 Max: 1 00:31:48.566 Min: 1 00:31:48.566 Completion Queue Entry Size 00:31:48.566 Max: 1 00:31:48.566 Min: 1 00:31:48.566 Number of Namespaces: 0 00:31:48.566 Compare Command: Not Supported 00:31:48.566 Write Uncorrectable Command: Not Supported 00:31:48.566 Dataset Management Command: Not Supported 00:31:48.566 Write Zeroes Command: Not Supported 00:31:48.566 Set Features Save Field: Not Supported 00:31:48.566 Reservations: Not Supported 00:31:48.566 Timestamp: Not Supported 00:31:48.566 Copy: Not Supported 00:31:48.566 Volatile Write Cache: Not Present 00:31:48.566 Atomic Write Unit (Normal): 1 00:31:48.566 Atomic Write Unit (PFail): 1 00:31:48.566 Atomic Compare & Write Unit: 1 00:31:48.566 Fused Compare & Write: Not Supported 00:31:48.566 Scatter-Gather List 00:31:48.566 SGL Command Set: Supported 00:31:48.566 SGL Keyed: Not Supported 00:31:48.566 SGL Bit Bucket Descriptor: Not Supported 00:31:48.566 SGL Metadata Pointer: Not Supported 00:31:48.566 Oversized SGL: Not Supported 00:31:48.566 SGL Metadata Address: Not Supported 00:31:48.566 SGL Offset: Supported 00:31:48.566 Transport SGL Data Block: Not Supported 00:31:48.566 Replay Protected Memory Block: Not Supported 00:31:48.566 00:31:48.566 Firmware Slot Information 00:31:48.566 ========================= 00:31:48.566 Active slot: 0 00:31:48.566 00:31:48.566 00:31:48.566 Error Log 00:31:48.566 ========= 00:31:48.566 00:31:48.566 Active Namespaces 00:31:48.566 ================= 00:31:48.566 Discovery Log Page 00:31:48.566 ================== 00:31:48.566 Generation Counter: 2 00:31:48.566 Number of Records: 2 00:31:48.566 Record Format: 0 00:31:48.566 00:31:48.566 Discovery Log Entry 0 00:31:48.566 ---------------------- 00:31:48.566 Transport Type: 3 (TCP) 00:31:48.566 Address Family: 1 (IPv4) 00:31:48.566 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:48.566 Entry Flags: 00:31:48.566 Duplicate Returned Information: 0 00:31:48.566 Explicit Persistent Connection Support for Discovery: 0 00:31:48.566 Transport Requirements: 00:31:48.566 Secure Channel: Not Specified 00:31:48.566 Port ID: 1 (0x0001) 00:31:48.566 Controller ID: 65535 (0xffff) 00:31:48.566 Admin Max SQ Size: 32 00:31:48.566 Transport Service Identifier: 4420 00:31:48.566 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:48.566 Transport Address: 10.0.0.1 00:31:48.566 Discovery Log Entry 1 00:31:48.566 ---------------------- 00:31:48.566 Transport Type: 3 (TCP) 00:31:48.566 Address Family: 1 (IPv4) 00:31:48.566 Subsystem Type: 2 (NVM Subsystem) 00:31:48.566 Entry Flags: 
00:31:48.566 Duplicate Returned Information: 0 00:31:48.566 Explicit Persistent Connection Support for Discovery: 0 00:31:48.566 Transport Requirements: 00:31:48.566 Secure Channel: Not Specified 00:31:48.566 Port ID: 1 (0x0001) 00:31:48.566 Controller ID: 65535 (0xffff) 00:31:48.567 Admin Max SQ Size: 32 00:31:48.567 Transport Service Identifier: 4420 00:31:48.567 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:48.567 Transport Address: 10.0.0.1 00:31:48.567 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:48.567 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.567 get_feature(0x01) failed 00:31:48.567 get_feature(0x02) failed 00:31:48.567 get_feature(0x04) failed 00:31:48.567 ===================================================== 00:31:48.567 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:48.567 ===================================================== 00:31:48.567 Controller Capabilities/Features 00:31:48.567 ================================ 00:31:48.567 Vendor ID: 0000 00:31:48.567 Subsystem Vendor ID: 0000 00:31:48.567 Serial Number: 46e853eebe68d5596e05 00:31:48.567 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:48.567 Firmware Version: 6.7.0-68 00:31:48.567 Recommended Arb Burst: 6 00:31:48.567 IEEE OUI Identifier: 00 00 00 00:31:48.567 Multi-path I/O 00:31:48.567 May have multiple subsystem ports: Yes 00:31:48.567 May have multiple controllers: Yes 00:31:48.567 Associated with SR-IOV VF: No 00:31:48.567 Max Data Transfer Size: Unlimited 00:31:48.567 Max Number of Namespaces: 1024 00:31:48.567 Max Number of I/O Queues: 128 00:31:48.567 NVMe Specification Version (VS): 1.3 00:31:48.567 NVMe Specification Version (Identify): 1.3 00:31:48.567 Maximum Queue Entries: 1024 00:31:48.567 Contiguous Queues Required: No 00:31:48.567 Arbitration Mechanisms Supported 00:31:48.567 Weighted Round Robin: Not Supported 00:31:48.567 Vendor Specific: Not Supported 00:31:48.567 Reset Timeout: 7500 ms 00:31:48.567 Doorbell Stride: 4 bytes 00:31:48.567 NVM Subsystem Reset: Not Supported 00:31:48.567 Command Sets Supported 00:31:48.567 NVM Command Set: Supported 00:31:48.567 Boot Partition: Not Supported 00:31:48.567 Memory Page Size Minimum: 4096 bytes 00:31:48.567 Memory Page Size Maximum: 4096 bytes 00:31:48.567 Persistent Memory Region: Not Supported 00:31:48.567 Optional Asynchronous Events Supported 00:31:48.567 Namespace Attribute Notices: Supported 00:31:48.567 Firmware Activation Notices: Not Supported 00:31:48.567 ANA Change Notices: Supported 00:31:48.567 PLE Aggregate Log Change Notices: Not Supported 00:31:48.567 LBA Status Info Alert Notices: Not Supported 00:31:48.567 EGE Aggregate Log Change Notices: Not Supported 00:31:48.567 Normal NVM Subsystem Shutdown event: Not Supported 00:31:48.567 Zone Descriptor Change Notices: Not Supported 00:31:48.567 Discovery Log Change Notices: Not Supported 00:31:48.567 Controller Attributes 00:31:48.567 128-bit Host Identifier: Supported 00:31:48.567 Non-Operational Permissive Mode: Not Supported 00:31:48.567 NVM Sets: Not Supported 00:31:48.567 Read Recovery Levels: Not Supported 00:31:48.567 Endurance Groups: Not Supported 00:31:48.567 Predictable Latency Mode: Not Supported 00:31:48.567 Traffic Based Keep ALive: Supported 00:31:48.567 Namespace Granularity: Not Supported 
00:31:48.567 SQ Associations: Not Supported 00:31:48.567 UUID List: Not Supported 00:31:48.567 Multi-Domain Subsystem: Not Supported 00:31:48.567 Fixed Capacity Management: Not Supported 00:31:48.567 Variable Capacity Management: Not Supported 00:31:48.567 Delete Endurance Group: Not Supported 00:31:48.567 Delete NVM Set: Not Supported 00:31:48.567 Extended LBA Formats Supported: Not Supported 00:31:48.567 Flexible Data Placement Supported: Not Supported 00:31:48.567 00:31:48.567 Controller Memory Buffer Support 00:31:48.567 ================================ 00:31:48.567 Supported: No 00:31:48.567 00:31:48.567 Persistent Memory Region Support 00:31:48.567 ================================ 00:31:48.567 Supported: No 00:31:48.567 00:31:48.567 Admin Command Set Attributes 00:31:48.567 ============================ 00:31:48.567 Security Send/Receive: Not Supported 00:31:48.567 Format NVM: Not Supported 00:31:48.567 Firmware Activate/Download: Not Supported 00:31:48.567 Namespace Management: Not Supported 00:31:48.567 Device Self-Test: Not Supported 00:31:48.567 Directives: Not Supported 00:31:48.567 NVMe-MI: Not Supported 00:31:48.567 Virtualization Management: Not Supported 00:31:48.567 Doorbell Buffer Config: Not Supported 00:31:48.567 Get LBA Status Capability: Not Supported 00:31:48.567 Command & Feature Lockdown Capability: Not Supported 00:31:48.567 Abort Command Limit: 4 00:31:48.567 Async Event Request Limit: 4 00:31:48.567 Number of Firmware Slots: N/A 00:31:48.567 Firmware Slot 1 Read-Only: N/A 00:31:48.567 Firmware Activation Without Reset: N/A 00:31:48.567 Multiple Update Detection Support: N/A 00:31:48.567 Firmware Update Granularity: No Information Provided 00:31:48.567 Per-Namespace SMART Log: Yes 00:31:48.567 Asymmetric Namespace Access Log Page: Supported 00:31:48.567 ANA Transition Time : 10 sec 00:31:48.567 00:31:48.567 Asymmetric Namespace Access Capabilities 00:31:48.567 ANA Optimized State : Supported 00:31:48.567 ANA Non-Optimized State : Supported 00:31:48.567 ANA Inaccessible State : Supported 00:31:48.567 ANA Persistent Loss State : Supported 00:31:48.567 ANA Change State : Supported 00:31:48.567 ANAGRPID is not changed : No 00:31:48.567 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:48.567 00:31:48.567 ANA Group Identifier Maximum : 128 00:31:48.567 Number of ANA Group Identifiers : 128 00:31:48.567 Max Number of Allowed Namespaces : 1024 00:31:48.567 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:48.567 Command Effects Log Page: Supported 00:31:48.567 Get Log Page Extended Data: Supported 00:31:48.567 Telemetry Log Pages: Not Supported 00:31:48.567 Persistent Event Log Pages: Not Supported 00:31:48.567 Supported Log Pages Log Page: May Support 00:31:48.567 Commands Supported & Effects Log Page: Not Supported 00:31:48.567 Feature Identifiers & Effects Log Page:May Support 00:31:48.567 NVMe-MI Commands & Effects Log Page: May Support 00:31:48.567 Data Area 4 for Telemetry Log: Not Supported 00:31:48.567 Error Log Page Entries Supported: 128 00:31:48.567 Keep Alive: Supported 00:31:48.567 Keep Alive Granularity: 1000 ms 00:31:48.567 00:31:48.567 NVM Command Set Attributes 00:31:48.567 ========================== 00:31:48.567 Submission Queue Entry Size 00:31:48.567 Max: 64 00:31:48.567 Min: 64 00:31:48.567 Completion Queue Entry Size 00:31:48.567 Max: 16 00:31:48.567 Min: 16 00:31:48.567 Number of Namespaces: 1024 00:31:48.567 Compare Command: Not Supported 00:31:48.567 Write Uncorrectable Command: Not Supported 00:31:48.567 Dataset Management Command: Supported 
00:31:48.567 Write Zeroes Command: Supported 00:31:48.567 Set Features Save Field: Not Supported 00:31:48.567 Reservations: Not Supported 00:31:48.567 Timestamp: Not Supported 00:31:48.567 Copy: Not Supported 00:31:48.567 Volatile Write Cache: Present 00:31:48.567 Atomic Write Unit (Normal): 1 00:31:48.567 Atomic Write Unit (PFail): 1 00:31:48.567 Atomic Compare & Write Unit: 1 00:31:48.567 Fused Compare & Write: Not Supported 00:31:48.567 Scatter-Gather List 00:31:48.567 SGL Command Set: Supported 00:31:48.567 SGL Keyed: Not Supported 00:31:48.567 SGL Bit Bucket Descriptor: Not Supported 00:31:48.567 SGL Metadata Pointer: Not Supported 00:31:48.567 Oversized SGL: Not Supported 00:31:48.567 SGL Metadata Address: Not Supported 00:31:48.567 SGL Offset: Supported 00:31:48.567 Transport SGL Data Block: Not Supported 00:31:48.567 Replay Protected Memory Block: Not Supported 00:31:48.567 00:31:48.567 Firmware Slot Information 00:31:48.567 ========================= 00:31:48.567 Active slot: 0 00:31:48.567 00:31:48.567 Asymmetric Namespace Access 00:31:48.567 =========================== 00:31:48.567 Change Count : 0 00:31:48.567 Number of ANA Group Descriptors : 1 00:31:48.567 ANA Group Descriptor : 0 00:31:48.567 ANA Group ID : 1 00:31:48.567 Number of NSID Values : 1 00:31:48.567 Change Count : 0 00:31:48.567 ANA State : 1 00:31:48.567 Namespace Identifier : 1 00:31:48.567 00:31:48.567 Commands Supported and Effects 00:31:48.567 ============================== 00:31:48.567 Admin Commands 00:31:48.567 -------------- 00:31:48.567 Get Log Page (02h): Supported 00:31:48.567 Identify (06h): Supported 00:31:48.567 Abort (08h): Supported 00:31:48.567 Set Features (09h): Supported 00:31:48.567 Get Features (0Ah): Supported 00:31:48.567 Asynchronous Event Request (0Ch): Supported 00:31:48.567 Keep Alive (18h): Supported 00:31:48.567 I/O Commands 00:31:48.567 ------------ 00:31:48.567 Flush (00h): Supported 00:31:48.567 Write (01h): Supported LBA-Change 00:31:48.567 Read (02h): Supported 00:31:48.568 Write Zeroes (08h): Supported LBA-Change 00:31:48.568 Dataset Management (09h): Supported 00:31:48.568 00:31:48.568 Error Log 00:31:48.568 ========= 00:31:48.568 Entry: 0 00:31:48.568 Error Count: 0x3 00:31:48.568 Submission Queue Id: 0x0 00:31:48.568 Command Id: 0x5 00:31:48.568 Phase Bit: 0 00:31:48.568 Status Code: 0x2 00:31:48.568 Status Code Type: 0x0 00:31:48.568 Do Not Retry: 1 00:31:48.568 Error Location: 0x28 00:31:48.568 LBA: 0x0 00:31:48.568 Namespace: 0x0 00:31:48.568 Vendor Log Page: 0x0 00:31:48.568 ----------- 00:31:48.568 Entry: 1 00:31:48.568 Error Count: 0x2 00:31:48.568 Submission Queue Id: 0x0 00:31:48.568 Command Id: 0x5 00:31:48.568 Phase Bit: 0 00:31:48.568 Status Code: 0x2 00:31:48.568 Status Code Type: 0x0 00:31:48.568 Do Not Retry: 1 00:31:48.568 Error Location: 0x28 00:31:48.568 LBA: 0x0 00:31:48.568 Namespace: 0x0 00:31:48.568 Vendor Log Page: 0x0 00:31:48.568 ----------- 00:31:48.568 Entry: 2 00:31:48.568 Error Count: 0x1 00:31:48.568 Submission Queue Id: 0x0 00:31:48.568 Command Id: 0x4 00:31:48.568 Phase Bit: 0 00:31:48.568 Status Code: 0x2 00:31:48.568 Status Code Type: 0x0 00:31:48.568 Do Not Retry: 1 00:31:48.568 Error Location: 0x28 00:31:48.568 LBA: 0x0 00:31:48.568 Namespace: 0x0 00:31:48.568 Vendor Log Page: 0x0 00:31:48.568 00:31:48.568 Number of Queues 00:31:48.568 ================ 00:31:48.568 Number of I/O Submission Queues: 128 00:31:48.568 Number of I/O Completion Queues: 128 00:31:48.568 00:31:48.568 ZNS Specific Controller Data 00:31:48.568 
============================ 00:31:48.568 Zone Append Size Limit: 0 00:31:48.568 00:31:48.568 00:31:48.568 Active Namespaces 00:31:48.568 ================= 00:31:48.568 get_feature(0x05) failed 00:31:48.568 Namespace ID:1 00:31:48.568 Command Set Identifier: NVM (00h) 00:31:48.568 Deallocate: Supported 00:31:48.568 Deallocated/Unwritten Error: Not Supported 00:31:48.568 Deallocated Read Value: Unknown 00:31:48.568 Deallocate in Write Zeroes: Not Supported 00:31:48.568 Deallocated Guard Field: 0xFFFF 00:31:48.568 Flush: Supported 00:31:48.568 Reservation: Not Supported 00:31:48.568 Namespace Sharing Capabilities: Multiple Controllers 00:31:48.568 Size (in LBAs): 3750748848 (1788GiB) 00:31:48.568 Capacity (in LBAs): 3750748848 (1788GiB) 00:31:48.568 Utilization (in LBAs): 3750748848 (1788GiB) 00:31:48.568 UUID: 89dea8fc-d719-4f1d-838b-c040cd06faa9 00:31:48.568 Thin Provisioning: Not Supported 00:31:48.568 Per-NS Atomic Units: Yes 00:31:48.568 Atomic Write Unit (Normal): 8 00:31:48.568 Atomic Write Unit (PFail): 8 00:31:48.568 Preferred Write Granularity: 8 00:31:48.568 Atomic Compare & Write Unit: 8 00:31:48.568 Atomic Boundary Size (Normal): 0 00:31:48.568 Atomic Boundary Size (PFail): 0 00:31:48.568 Atomic Boundary Offset: 0 00:31:48.568 NGUID/EUI64 Never Reused: No 00:31:48.568 ANA group ID: 1 00:31:48.568 Namespace Write Protected: No 00:31:48.568 Number of LBA Formats: 1 00:31:48.568 Current LBA Format: LBA Format #00 00:31:48.568 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:48.568 00:31:48.568 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:48.568 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:48.568 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:48.568 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:48.568 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:48.568 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:48.568 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:48.568 rmmod nvme_tcp 00:31:48.848 rmmod nvme_fabrics 00:31:48.848 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:48.848 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:48.848 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:48.848 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:48.848 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:48.848 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:48.848 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:48.848 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:48.848 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:48.848 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.848 20:45:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:48.848 20:45:04 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.791 20:45:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:50.791 20:45:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:50.791 20:45:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:50.791 20:45:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:50.791 20:45:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:50.791 20:45:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:50.791 20:45:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:50.791 20:45:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:50.791 20:45:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:50.791 20:45:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:50.791 20:45:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:55.003 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:55.003 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:55.265 00:31:55.265 real 0m20.409s 00:31:55.265 user 0m5.603s 00:31:55.265 sys 0m11.789s 00:31:55.265 20:45:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:55.265 20:45:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:55.265 ************************************ 00:31:55.265 END TEST nvmf_identify_kernel_target 00:31:55.265 ************************************ 00:31:55.265 20:45:11 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:55.265 20:45:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:55.265 20:45:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:55.265 20:45:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:55.265 ************************************ 00:31:55.265 START TEST nvmf_auth 00:31:55.265 ************************************ 
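Note on the trace above: the identify_kernel_target run finishes by tearing down its kernel nvmet configuration, and the nvmf_auth test starting here builds a fresh one (nqn.2024-02.io.spdk:cnode0) further down in this log. The clean_kernel_target teardown traced above reduces to a handful of configfs operations; a minimal sketch, using the nqn.2016-06.io.spdk:testnqn subsystem and port 1 from this run rather than the verbatim nvmf/common.sh helper:

    # unlink the subsystem from the port before removing any directories
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    # namespaces must go before the subsystem directory itself
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    # with configfs empty, the kernel target modules can be unloaded
    modprobe -r nvmet_tcp nvmet

(The 'echo 0' seen in the trace disables the namespace before removal; the log does not show which attribute it is written to.)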
00:31:55.265 20:45:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:55.527 * Looking for test storage... 00:31:55.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:55.527 20:45:11 nvmf_tcp.nvmf_auth -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.527 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # uname -s 00:31:55.527 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.527 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.527 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.527 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.527 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.527 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.527 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.527 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.527 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- paths/export.sh@5 -- # export PATH 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@47 -- # : 0 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # keys=() 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # ckeys=() 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # nvmftestinit 00:31:55.528 
20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@285 -- # xtrace_disable 00:31:55.528 20:45:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # pci_devs=() 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # net_devs=() 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # e810=() 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # local -ga e810 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # x722=() 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # local -ga x722 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # mlx=() 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # local -ga mlx 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:03.671 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:03.671 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.671 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:03.672 Found net devices under 0000:31:00.0: cvl_0_0 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:03.672 Found net devices under 0000:31:00.1: cvl_0_1 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # is_hw=yes 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:03.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:03.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:32:03.672 00:32:03.672 --- 10.0.0.2 ping statistics --- 00:32:03.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.672 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:03.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:03.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:32:03.672 00:32:03.672 --- 10.0.0.1 ping statistics --- 00:32:03.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.672 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@422 -- # return 0 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # nvmfappstart -L nvme_auth 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@481 -- # nvmfpid=3265612 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@482 -- # waitforlisten 3265612 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 3265612 ']' 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
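The nvmftestinit sequence above splits the two e810 ports across network namespaces: the target port cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and nvmf_tgt is then launched inside that namespace with nvme_auth logging enabled. A condensed sketch of the same steps (interface names and addresses as in this run; the nvmf_tgt path is abbreviated):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator port
    ping -c 1 10.0.0.2                                             # sanity check: root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &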
00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:03.672 20:45:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@83 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key null 32 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=7a98b3247624a1c594e91a6071bef906 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.wfM 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 7a98b3247624a1c594e91a6071bef906 0 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 7a98b3247624a1c594e91a6071bef906 0 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=7a98b3247624a1c594e91a6071bef906 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.wfM 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.wfM 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # keys[0]=/tmp/spdk.key-null.wfM 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key sha512 64 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # 
key=4988dc9235bae0ae123b5ee40abba8eaa88402ba37b1114ac16e80806a3f69b6 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.C3h 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 4988dc9235bae0ae123b5ee40abba8eaa88402ba37b1114ac16e80806a3f69b6 3 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 4988dc9235bae0ae123b5ee40abba8eaa88402ba37b1114ac16e80806a3f69b6 3 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=4988dc9235bae0ae123b5ee40abba8eaa88402ba37b1114ac16e80806a3f69b6 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:32:03.934 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:32:04.196 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.C3h 00:32:04.196 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.C3h 00:32:04.196 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # ckeys[0]=/tmp/spdk.key-sha512.C3h 00:32:04.196 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key null 48 00:32:04.196 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:32:04.196 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.196 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:32:04.196 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:32:04.196 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:32:04.196 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:04.196 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=392155fecac8413f0ceb1bea9354d0486f973658aecb00c7 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.45P 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 392155fecac8413f0ceb1bea9354d0486f973658aecb00c7 0 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 392155fecac8413f0ceb1bea9354d0486f973658aecb00c7 0 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=392155fecac8413f0ceb1bea9354d0486f973658aecb00c7 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.45P 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.45P 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # keys[1]=/tmp/spdk.key-null.45P 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key sha384 48 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' 
['sha384']='2' ['sha512']='3') 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=c4fce2bf291a17e3d2ea62f8e3adba43c24317e815004c6d 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.Pn8 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key c4fce2bf291a17e3d2ea62f8e3adba43c24317e815004c6d 2 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 c4fce2bf291a17e3d2ea62f8e3adba43c24317e815004c6d 2 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=c4fce2bf291a17e3d2ea62f8e3adba43c24317e815004c6d 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:32:04.197 20:45:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.Pn8 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.Pn8 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # ckeys[1]=/tmp/spdk.key-sha384.Pn8 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=9960fcc37484d2c6a4d853afe084ca14 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.Hvw 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 9960fcc37484d2c6a4d853afe084ca14 1 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 9960fcc37484d2c6a4d853afe084ca14 1 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=9960fcc37484d2c6a4d853afe084ca14 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.Hvw 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.Hvw 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # 
keys[2]=/tmp/spdk.key-sha256.Hvw 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=960295d2c157034f786cac2fa962d9c7 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.FNo 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 960295d2c157034f786cac2fa962d9c7 1 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 960295d2c157034f786cac2fa962d9c7 1 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=960295d2c157034f786cac2fa962d9c7 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:32:04.197 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:32:04.459 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.FNo 00:32:04.459 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.FNo 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # ckeys[2]=/tmp/spdk.key-sha256.FNo 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key sha384 48 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=d84df246e86a104c20b1c34a492b3c34ccd2d49a027b4f9c 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.pYI 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key d84df246e86a104c20b1c34a492b3c34ccd2d49a027b4f9c 2 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 d84df246e86a104c20b1c34a492b3c34ccd2d49a027b4f9c 2 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=d84df246e86a104c20b1c34a492b3c34ccd2d49a027b4f9c 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@705 -- # python - 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.pYI 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.pYI 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # keys[3]=/tmp/spdk.key-sha384.pYI 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key null 32 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=870d75f13be14a9a794edb06f1405b48 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.GGI 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 870d75f13be14a9a794edb06f1405b48 0 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 870d75f13be14a9a794edb06f1405b48 0 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=870d75f13be14a9a794edb06f1405b48 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.GGI 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.GGI 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # ckeys[3]=/tmp/spdk.key-null.GGI 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # gen_key sha512 64 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=592c78a9c9122170841ebf3e725b755ab7e6d0dbf25686ea0081a1258901b851 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.ZF6 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 592c78a9c9122170841ebf3e725b755ab7e6d0dbf25686ea0081a1258901b851 3 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 592c78a9c9122170841ebf3e725b755ab7e6d0dbf25686ea0081a1258901b851 3 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix 
key digest 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=592c78a9c9122170841ebf3e725b755ab7e6d0dbf25686ea0081a1258901b851 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.ZF6 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.ZF6 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # keys[4]=/tmp/spdk.key-sha512.ZF6 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # ckeys[4]= 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@92 -- # waitforlisten 3265612 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 3265612 ']' 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:04.460 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wfM 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha512.C3h ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.C3h 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.45P 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha384.Pn8 ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pn8 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Hvw 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha256.FNo ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FNo 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.pYI 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-null.GGI ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.GGI 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ZF6 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n '' ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@98 -- # nvmet_auth_init 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # get_main_ns_ip 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@639 -- # local block nvme 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:04.723 20:45:20 nvmf_tcp.nvmf_auth -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:08.026 Waiting for block devices as requested 00:32:08.026 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:08.026 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:08.287 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:08.287 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:08.287 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:08.547 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:08.547 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:08.547 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:08.808 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:08.808 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:08.808 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:09.069 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:09.069 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:09.069 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:09.331 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:09.331 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:09.331 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:10.275 No valid GPT data, bailing 00:32:10.275 
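At this point configure_kernel_target has loaded nvmet, reset the PCI devices, and settled on /dev/nvme0n1 as the backing namespace; the mkdir/echo/ln -s calls that follow wire that device into a kernel NVMe/TCP target under /sys/kernel/config/nvmet. The xtrace records only the values being echoed, never the configfs files they are redirected into, so the sketch below fills in the conventional nvmet attribute names as an assumption rather than a transcript:

# hedged sketch of the kernel-target setup implied by the trace; the attribute
# file names (attr_*, device_path, enable, addr_*) are assumed, the values are
# the ones visible in the log
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo 1            > "$subsys/attr_allow_any_host"   # relaxed here, tightened again once allowed_hosts is populated
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                 # expose the subsystem on port 1

nvmet_auth_init then adds a host entry for nqn.2024-02.io.spdk:host0, sets allow_any_host back to 0, and links the host into the subsystem's allowed_hosts/, which is what makes the DH-HMAC-CHAP keys written later actually gate the connection.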
20:45:26 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@667 -- # echo 1 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@669 -- # echo 1 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@672 -- # echo tcp 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@673 -- # echo 4420 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@674 -- # echo ipv4 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:32:10.275 00:32:10.275 Discovery Log Number of Records 2, Generation counter 2 00:32:10.275 =====Discovery Log Entry 0====== 00:32:10.275 trtype: tcp 00:32:10.275 adrfam: ipv4 00:32:10.275 subtype: current discovery subsystem 00:32:10.275 treq: not specified, sq flow control disable supported 00:32:10.275 portid: 1 00:32:10.275 trsvcid: 4420 00:32:10.275 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:10.275 traddr: 10.0.0.1 00:32:10.275 eflags: none 00:32:10.275 sectype: none 00:32:10.275 =====Discovery Log Entry 1====== 00:32:10.275 trtype: tcp 00:32:10.275 adrfam: ipv4 00:32:10.275 subtype: nvme subsystem 00:32:10.275 treq: not specified, sq flow control disable supported 00:32:10.275 portid: 1 00:32:10.275 trsvcid: 4420 00:32:10.275 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:10.275 traddr: 10.0.0.1 00:32:10.275 eflags: none 00:32:10.275 sectype: none 00:32:10.275 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@37 -- # echo 0 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s sha256,sha384,sha512 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256,sha384,sha512 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:10.535 20:45:26 
nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:10.535 nvme0n1 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 0 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 
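That is one complete initiator-side authentication pass: the DHHC-1 secrets were registered as keyring entries earlier (keyring_file_add_key key1 /tmp/spdk.key-null.45P and ckey1 /tmp/spdk.key-sha384.Pn8), bdev_nvme was allowed every digest and DH group, and the controller was attached with --dhchap-key/--dhchap-ctrlr-key, verified via bdev_nvme_get_controllers, and detached. Condensed into standalone RPCs it looks roughly like the following (rpc_cmd in this trace is the test suite's wrapper around SPDK's scripts/rpc.py, so the flags are taken verbatim from the log):

# one DH-HMAC-CHAP attach/verify/detach cycle, condensed from the rpc_cmd calls above
rpc=scripts/rpc.py
"$rpc" keyring_file_add_key key1  /tmp/spdk.key-null.45P    # host secret
"$rpc" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pn8  # controller (bidirectional) secret
"$rpc" bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" bdev_nvme_get_controllers        # expect one controller named nvme0
"$rpc" bdev_nvme_detach_controller nvme0

The digest/DH-group sweep whose first iteration begins just above repeats this cycle with bdev_nvme_set_options narrowed to a single digest/DH-group combination at a time.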
00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.535 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:10.796 nvme0n1 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:10.796 20:45:26 
nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 1 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.796 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.057 nvme0n1 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- 
host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 2 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.057 20:45:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.318 nvme0n1 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.318 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 3 
00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.319 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.580 nvme0n1 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 4 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.580 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.842 nvme0n1 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.842 20:45:27 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 0 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:11.842 20:45:27 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.842 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.103 nvme0n1 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 1 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.104 20:45:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.366 nvme0n1 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 2 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.366 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.628 nvme0n1 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 3 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.628 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.889 nvme0n1 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 4 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.889 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:13.151 nvme0n1 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 0 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.151 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:13.152 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.152 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:13.152 20:45:28 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:13.152 20:45:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:13.152 20:45:28 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:13.152 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.152 20:45:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:13.414 nvme0n1 00:32:13.414 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.414 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 1 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.415 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:13.676 nvme0n1 00:32:13.676 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.676 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.676 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.676 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:13.676 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:13.676 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 2 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.936 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:14.198 nvme0n1 00:32:14.198 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.198 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.198 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:14.198 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.198 20:45:29 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:14.198 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.198 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.198 20:45:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.198 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.198 20:45:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 3 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
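The entries around this point repeat one host-side RPC sequence per digest/dhgroup/key combination: restrict the allowed DH-CHAP digests and DH groups with bdev_nvme_set_options, attach the controller with the selected key, confirm it with bdev_nvme_get_controllers, then detach. A minimal standalone sketch of that cycle is below; it assumes the test's rpc_cmd wrapper is replaced by a direct call to scripts/rpc.py from an SPDK checkout, and that keyring entries named key3/ckey3 were already loaded by the test's earlier setup (both assumptions, not shown in this excerpt).

  # One DH-CHAP attach/verify/detach cycle as exercised by connect_authenticate
  # (sketch; the key names and the 10.0.0.1:4420 listener match the log above).
  digest=sha256
  dhgroup=ffdhe4096
  keyid=3

  # Limit the initiator to the digest/dhgroup pair under test (auth.sh@73 in the log).
  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with the host key and, when present, the bidirectional controller key (auth.sh@74).
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Verify the controller authenticated, then detach before the next combination (auth.sh@77-78).
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The detach is what returns the controller list to empty for the next combination, which is why every block in the log ends with bdev_nvme_detach_controller nvme0 before the next nvme0n1 probe appears.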
00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.198 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:14.460 nvme0n1 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 4 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:14.460 20:45:30 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.460 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:14.722 nvme0n1 00:32:14.722 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.722 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.722 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:14.722 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.722 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:14.984 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.984 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 0 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.985 20:45:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:15.558 nvme0n1 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 1 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:15.558 
20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.558 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.559 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:15.559 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.559 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:15.559 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:15.559 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:15.559 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:15.559 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.559 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:15.820 nvme0n1 00:32:15.820 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.820 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.820 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.820 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:15.820 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:15.820 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:16.081 
20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 2 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.081 20:45:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:16.342 nvme0n1 00:32:16.342 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.342 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.342 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.342 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:16.342 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:16.342 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 3 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:16.604 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:16.865 nvme0n1 00:32:16.865 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.865 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.865 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.865 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:16.865 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:16.865 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 4 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
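Stepping back, the sweep visible in this portion of the log is driven by the nested loops traced at auth.sh@114-117: for each DH group (ffdhe3072, ffdhe4096, ffdhe6144 and ffdhe8192 appear in this excerpt) the script walks key IDs 0 through 4, programs the matching DHHC-1 key on the target via nvmet_auth_set_key, then runs the host-side connect_authenticate cycle. A rough outline of that control flow, assuming the keys/ckeys arrays and the two helper functions defined earlier in the host/auth.sh script that produced these entries, is:

  # Outline of the digest/dhgroup/key sweep (sketch; array contents and helper
  # bodies live earlier in host/auth.sh and are assumed here, not reproduced).
  digest=sha256
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # groups seen in this excerpt

  for dhgroup in "${dhgroups[@]}"; do            # auth.sh@114
      for keyid in "${!keys[@]}"; do             # auth.sh@115, keyid 0..4
          # Target side: select hmac(sha256), the DH group, and the DHHC-1 key (auth.sh@116).
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          # Host side: set_options, attach, verify, detach (auth.sh@117, sketched above).
          connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
  done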
00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.127 20:45:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:17.699 nvme0n1 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:17.699 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 0 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.700 20:45:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:18.276 nvme0n1 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.276 20:45:34 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 1 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.276 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:19.218 nvme0n1 00:32:19.218 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.218 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.218 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.218 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:19.218 20:45:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:19.218 20:45:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 2 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.218 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:20.163 nvme0n1 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:20.163 
20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 3 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.163 20:45:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:20.735 nvme0n1 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.735 20:45:36 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:20.735 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 4 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:20.736 20:45:36 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.736 20:45:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:21.678 nvme0n1 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 0 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:21.678 
20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:21.678 nvme0n1 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:21.678 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 1 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:21.939 nvme0n1 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:21.939 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 2 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.200 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:22.201 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.201 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:22.201 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:22.201 20:45:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:22.201 20:45:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:22.201 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.201 20:45:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.201 nvme0n1 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha384 ffdhe2048 3 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.201 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.462 nvme0n1 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.462 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:22.463 20:45:38 
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 4 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.463 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.724 nvme0n1 00:32:22.724 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.724 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 0 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:22.725 
20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.725 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.986 nvme0n1 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:32:22.986 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 1 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.987 20:45:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:23.249 nvme0n1 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 2 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.249 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:23.510 nvme0n1 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.510 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 3 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.511 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:23.772 nvme0n1 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 4 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.772 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:24.033 nvme0n1 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.033 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 0 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.034 20:45:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:24.649 nvme0n1 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 1 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.649 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:24.650 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:24.650 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:24.650 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:24.650 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.650 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.650 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:24.650 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.650 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:24.650 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:24.650 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:24.650 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.650 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.650 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:24.939 nvme0n1 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 2 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.939 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:25.206 nvme0n1 00:32:25.206 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.206 20:45:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.206 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.206 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:25.206 20:45:40 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:25.206 20:45:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 3 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.206 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:25.468 nvme0n1 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 4 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:25.468 20:45:41 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:25.468 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.729 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:25.991 nvme0n1 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 0 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.991 20:45:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:26.563 nvme0n1 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 1 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:26.563 
20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.563 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:27.135 nvme0n1 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:27.135 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:27.136 
20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 2 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.136 20:45:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:27.397 nvme0n1 00:32:27.397 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.397 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.397 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:27.397 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.397 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 3 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:27.659 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:27.928 nvme0n1 00:32:27.928 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 4 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.195 20:45:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:28.768 nvme0n1 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 0 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.768 20:45:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:29.342 nvme0n1 00:32:29.342 20:45:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.342 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.342 20:45:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.342 20:45:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:29.342 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:29.342 20:45:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.342 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.342 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.342 20:45:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.342 20:45:45 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 1 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.603 20:45:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:30.176 nvme0n1 00:32:30.176 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.176 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.176 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:30.176 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.176 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:30.176 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.176 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.176 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.176 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.176 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 2 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe8192 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:30.437 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:30.438 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.438 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.438 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:30.438 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.438 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:30.438 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:30.438 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:30.438 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:30.438 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.438 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:31.011 nvme0n1 00:32:31.011 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.011 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.011 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:31.011 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.011 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:31.011 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:31.272 
20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 3 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.272 20:45:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:31.846 nvme0n1 00:32:31.846 20:45:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.846 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.846 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:31.846 20:45:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.846 20:45:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:31.846 20:45:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.846 20:45:47 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.846 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.846 20:45:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.846 20:45:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 4 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:32.107 20:45:47 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.107 20:45:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:32.681 nvme0n1 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 0 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:32.681 
20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.681 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:32.943 nvme0n1 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 1 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.943 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.204 nvme0n1 00:32:33.204 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.204 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:33.204 20:45:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:33.204 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.204 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.204 20:45:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.204 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.204 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.204 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.204 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.204 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.204 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:33.204 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:33.204 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 2 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.205 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.467 nvme0n1 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha512 ffdhe2048 3 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.467 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.729 nvme0n1 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:33.729 20:45:49 
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 4 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.729 nvme0n1 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.729 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 0 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:33.990 
20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:33.990 nvme0n1 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.990 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 1 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.250 20:45:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.250 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:34.538 nvme0n1 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 2 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:34.538 nvme0n1 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:34.538 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 3 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:34.799 nvme0n1 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:34.799 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 4 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:32:35.059 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:35.060 nvme0n1 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:35.060 20:45:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 0 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.320 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:35.581 nvme0n1 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 1 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:35.581 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.582 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:35.843 nvme0n1 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 2 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.843 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:36.104 nvme0n1 00:32:36.104 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.104 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.104 20:45:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:36.104 20:45:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.104 20:45:51 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:36.104 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.104 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.104 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.104 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.104 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 3 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
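The connect_authenticate iterations traced above reduce to a short initiator-side RPC sequence. The sketch below condenses one iteration, assuming rpc_cmd is the test suite's wrapper for sending JSON-RPC calls to the running SPDK application (roughly equivalent to scripts/rpc.py) and that the names key0/ckey0 refer to secrets loaded earlier in the run, outside this excerpt:

  # Restrict the initiator to one digest and DH group, then connect with in-band DH-HMAC-CHAP.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Confirm the controller attached, then detach before the next digest/dhgroup/key combination.
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # prints nvme0 on success
  rpc_cmd bdev_nvme_detach_controller nvme0

Omitting --dhchap-ctrlr-key, as the keyid=4 iterations above do, requests host-only (unidirectional) authentication; the controller is not challenged in return.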
00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.366 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:36.628 nvme0n1 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 4 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:36.628 20:45:52 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.628 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:36.889 nvme0n1 00:32:36.889 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.889 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 0 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.890 20:45:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:37.462 nvme0n1 00:32:37.462 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.462 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:37.462 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:37.462 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.462 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:37.462 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.462 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.462 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.462 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.462 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 1 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:37.463 
20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.463 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:38.035 nvme0n1 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:38.035 
20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 2 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.035 20:45:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:38.607 nvme0n1 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.607 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 3 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:38.608 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:39.179 nvme0n1 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 4 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.179 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
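The DHHC-1 strings cycled through in this trace are NVMe-oF secret representations rather than raw passphrases. A small inspection sketch, assuming the convention used by nvme-cli and the Linux NVMe code: the second field records how the secret was transformed (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 payload carries the secret followed by a 4-byte CRC32 of it:

  # Inspect the keyid=4 secret seen in the trace (stand-alone helper, not part of auth.sh).
  key='DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=:'
  payload=$(cut -d: -f3 <<< "$key")                     # base64 field between the second and trailing ':'
  total=$(printf '%s' "$payload" | base64 -d | wc -c)   # decoded length: secret plus 4 CRC bytes
  echo "secret length: $((total - 4)) bytes"            # 64 here, matching a SHA-512-sized secret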
00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.180 20:45:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:39.441 nvme0n1 00:32:39.441 20:45:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.441 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.441 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:39.441 20:45:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.441 20:45:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:N2E5OGIzMjQ3NjI0YTFjNTk0ZTkxYTYwNzFiZWY5MDbaX6Ax: 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: ]] 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NDk4OGRjOTIzNWJhZTBhZTEyM2I1ZWU0MGFiYmE4ZWFhODg0MDJiYTM3YjExMTRhYzE2ZTgwODA2YTNmNjliNh/LPz0=: 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 0 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.702 20:45:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:40.274 nvme0n1 00:32:40.274 20:45:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.274 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.274 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:40.274 20:45:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.274 20:45:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:40.274 20:45:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.535 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.535 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.535 20:45:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.535 20:45:56 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:40.535 20:45:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.535 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:40.535 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 1 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.536 20:45:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:41.107 nvme0n1 00:32:41.107 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.107 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.107 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:41.107 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.107 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:41.107 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:OTk2MGZjYzM3NDg0ZDJjNmE0ZDg1M2FmZTA4NGNhMTRGJt0L: 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: ]] 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OTYwMjk1ZDJjMTU3MDM0Zjc4NmNhYzJmYTk2MmQ5YzeveKuh: 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 2 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.369 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:41.370 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:41.370 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:41.370 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:41.370 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.370 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.370 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:41.370 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.370 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:41.370 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:41.370 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:41.370 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:41.370 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.370 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:41.942 nvme0n1 00:32:41.942 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.942 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.942 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:41.942 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.942 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:41.942 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.942 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.942 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.942 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.942 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:42.203 
20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0ZGYyNDZlODZhMTA0YzIwYjFjMzRhNDkyYjNjMzRjY2QyZDQ5YTAyN2I0Zjljab97wA==: 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: ]] 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ODcwZDc1ZjEzYmUxNGE5YTc5NGVkYjA2ZjE0MDViNDgkGLux: 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 3 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.203 20:45:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:42.776 nvme0n1 00:32:42.776 20:45:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.776 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.776 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:42.776 20:45:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.776 20:45:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:42.776 20:45:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.776 20:45:58 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.776 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.776 20:45:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.776 20:45:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:NTkyYzc4YTljOTEyMjE3MDg0MWViZjNlNzI1Yjc1NWFiN2U2ZDBkYmYyNTY4NmVhMDA4MWExMjU4OTAxYjg1MbJyWd4=: 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 4 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:43.037 20:45:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.038 20:45:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:43.038 20:45:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:43.038 20:45:58 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:43.038 20:45:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:43.038 20:45:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.038 20:45:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:43.609 nvme0n1 00:32:43.609 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.609 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.609 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:32:43.609 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.609 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:43.609 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.609 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.609 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.609 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.609 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@123 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MzkyMTU1ZmVjYWM4NDEzZjBjZWIxYmVhOTM1NGQwNDg2Zjk3MzY1OGFlY2IwMGM36WdMZA==: 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: ]] 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzRmY2UyYmYyOTFhMTdlM2QyZWE2MmY4ZTNhZGJhNDNjMjQzMTdlODE1MDA0YzZkJRw6Ug==: 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@124 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # get_main_ns_ip 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:43.871 
20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:43.871 request: 00:32:43.871 { 00:32:43.871 "name": "nvme0", 00:32:43.871 "trtype": "tcp", 00:32:43.871 "traddr": "10.0.0.1", 00:32:43.871 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:43.871 "adrfam": "ipv4", 00:32:43.871 "trsvcid": "4420", 00:32:43.871 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:43.871 "method": "bdev_nvme_attach_controller", 00:32:43.871 "req_id": 1 00:32:43.871 } 00:32:43.871 Got JSON-RPC error response 00:32:43.871 response: 00:32:43.871 { 00:32:43.871 "code": -32602, 00:32:43.871 "message": "Invalid parameters" 00:32:43.871 } 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:43.871 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # jq length 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.872 20:45:59 
nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # (( 0 == 0 )) 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # get_main_ns_ip 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:43.872 request: 00:32:43.872 { 00:32:43.872 "name": "nvme0", 00:32:43.872 "trtype": "tcp", 00:32:43.872 "traddr": "10.0.0.1", 00:32:43.872 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:43.872 "adrfam": "ipv4", 00:32:43.872 "trsvcid": "4420", 00:32:43.872 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:43.872 "dhchap_key": "key2", 00:32:43.872 "method": "bdev_nvme_attach_controller", 00:32:43.872 "req_id": 1 00:32:43.872 } 00:32:43.872 Got JSON-RPC error response 00:32:43.872 response: 00:32:43.872 { 00:32:43.872 "code": -32602, 00:32:43.872 "message": "Invalid parameters" 00:32:43.872 } 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # 
rpc_cmd bdev_nvme_get_controllers 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # jq length 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:43.872 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # (( 0 == 0 )) 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # get_main_ns_ip 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:44.133 request: 00:32:44.133 { 00:32:44.133 "name": "nvme0", 00:32:44.133 "trtype": "tcp", 00:32:44.133 "traddr": "10.0.0.1", 00:32:44.133 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:44.133 "adrfam": "ipv4", 00:32:44.133 "trsvcid": "4420", 00:32:44.133 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:44.133 "dhchap_key": "key1", 00:32:44.133 "dhchap_ctrlr_key": "ckey2", 00:32:44.133 "method": "bdev_nvme_attach_controller", 00:32:44.133 "req_id": 1 00:32:44.133 } 00:32:44.133 Got JSON-RPC error response 00:32:44.133 response: 00:32:44.133 { 00:32:44.133 "code": -32602, 00:32:44.133 "message": "Invalid parameters" 00:32:44.133 } 
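The three rejected attach attempts traced above are the suite's negative checks: connecting with no key, with key2 alone, and with key1 paired against ckey2 must all fail DH-HMAC-CHAP and surface as JSON-RPC error -32602 before any controller object is created. A minimal standalone sketch of the same check, under the assumptions that the SPDK checkout and default host RPC socket from this run are in place and that the named keys (key1, ckey2) have already been loaded the way the suite does earlier:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Mismatched host/controller keys: the target must refuse the connection,
  # mirroring the "Invalid parameters" (-32602) response captured above.
  if "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
      echo "FAIL: attach with mismatched DH-HMAC-CHAP keys unexpectedly succeeded" >&2
      exit 1
  fi
  # The failed attempt must not leave a controller behind (same jq check as the trace).
  [[ $("$rpc" bdev_nvme_get_controllers | jq length) -eq 0 ]]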
00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@140 -- # trap - SIGINT SIGTERM EXIT 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@141 -- # cleanup 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- host/auth.sh@24 -- # nvmftestfini 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@117 -- # sync 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@120 -- # set +e 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:44.133 rmmod nvme_tcp 00:32:44.133 rmmod nvme_fabrics 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@124 -- # set -e 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@125 -- # return 0 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@489 -- # '[' -n 3265612 ']' 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@490 -- # killprocess 3265612 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@946 -- # '[' -z 3265612 ']' 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@950 -- # kill -0 3265612 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # uname 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:44.133 20:45:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3265612 00:32:44.133 20:46:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:44.133 20:46:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:44.133 20:46:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3265612' 00:32:44.133 killing process with pid 3265612 00:32:44.133 20:46:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@965 -- # kill 3265612 00:32:44.133 20:46:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@970 -- # wait 3265612 00:32:44.394 20:46:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:44.394 20:46:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:44.394 20:46:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:44.394 20:46:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:44.394 20:46:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:44.394 20:46:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.394 20:46:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:44.394 20:46:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.307 20:46:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:32:46.307 20:46:02 nvmf_tcp.nvmf_auth -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:46.307 20:46:02 nvmf_tcp.nvmf_auth -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:46.307 20:46:02 nvmf_tcp.nvmf_auth -- host/auth.sh@27 -- # clean_kernel_target 00:32:46.307 20:46:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:46.307 20:46:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@686 -- # echo 0 00:32:46.307 20:46:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:46.307 20:46:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:46.307 20:46:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:46.307 20:46:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:46.307 20:46:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:46.307 20:46:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:46.567 20:46:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:50.774 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:50.774 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:50.774 20:46:06 nvmf_tcp.nvmf_auth -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.wfM /tmp/spdk.key-null.45P /tmp/spdk.key-sha256.Hvw /tmp/spdk.key-sha384.pYI /tmp/spdk.key-sha512.ZF6 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:50.774 20:46:06 nvmf_tcp.nvmf_auth -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:54.987 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 0000:00:01.6 (8086 0b00): Already using the vfio-pci 
driver 00:32:54.987 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:54.987 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:54.987 00:32:54.987 real 0m59.532s 00:32:54.987 user 0m51.735s 00:32:54.987 sys 0m16.090s 00:32:54.987 20:46:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:54.987 20:46:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:32:54.987 ************************************ 00:32:54.987 END TEST nvmf_auth 00:32:54.987 ************************************ 00:32:54.987 20:46:10 nvmf_tcp -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:32:54.987 20:46:10 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:54.987 20:46:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:54.987 20:46:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:54.987 20:46:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:54.987 ************************************ 00:32:54.988 START TEST nvmf_digest 00:32:54.988 ************************************ 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:54.988 * Looking for test storage... 
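At this point nvmf.sh line 106 hands control from nvmf_auth to the digest suite through the same run_test harness that produced the START/END TEST banners above. A rough sketch of replaying only this stage outside the nightly job, assuming the same workspace path and that the common test library provides the environment run_test expects:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  source "$rootdir/test/common/autotest_common.sh"
  # run_test names the stage, times it, and prints the banners seen in this log.
  run_test nvmf_digest "$rootdir/test/nvmf/host/digest.sh" --transport=tcp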
00:32:54.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:54.988 20:46:10 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:32:54.988 20:46:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:03.196 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:03.197 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:03.197 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:03.197 Found net devices under 0000:31:00.0: cvl_0_0 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:03.197 Found net devices under 0000:31:00.1: cvl_0_1 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:03.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:03.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:33:03.197 00:33:03.197 --- 10.0.0.2 ping statistics --- 00:33:03.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.197 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:03.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:03.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:33:03.197 00:33:03.197 --- 10.0.0.1 ping statistics --- 00:33:03.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.197 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:03.197 ************************************ 00:33:03.197 START TEST nvmf_digest_clean 00:33:03.197 ************************************ 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3282969 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3282969 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3282969 ']' 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.197 
20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:03.197 20:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:03.197 [2024-05-13 20:46:18.994761] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:33:03.197 [2024-05-13 20:46:18.994804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:03.197 EAL: No free 2048 kB hugepages reported on node 1 00:33:03.197 [2024-05-13 20:46:19.076102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.197 [2024-05-13 20:46:19.139672] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:03.197 [2024-05-13 20:46:19.139708] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:03.197 [2024-05-13 20:46:19.139716] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:03.198 [2024-05-13 20:46:19.139723] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:03.198 [2024-05-13 20:46:19.139728] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:03.198 [2024-05-13 20:46:19.139754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:04.142 null0 00:33:04.142 [2024-05-13 20:46:19.866943] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:04.142 [2024-05-13 20:46:19.890936] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:04.142 [2024-05-13 20:46:19.891172] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3283307 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3283307 /var/tmp/bperf.sock 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3283307 ']' 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:04.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:04.142 20:46:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:04.142 [2024-05-13 20:46:19.939993] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:33:04.142 [2024-05-13 20:46:19.940042] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283307 ] 00:33:04.142 EAL: No free 2048 kB hugepages reported on node 1 00:33:04.142 [2024-05-13 20:46:20.023561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.402 [2024-05-13 20:46:20.090178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.973 20:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:04.973 20:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:04.973 20:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:04.973 20:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:04.973 20:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:05.233 20:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.233 20:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.493 nvme0n1 00:33:05.493 20:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:05.493 20:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:05.493 Running I/O for 2 seconds... 
00:33:08.051 00:33:08.051 Latency(us) 00:33:08.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.051 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:08.051 nvme0n1 : 2.00 21013.78 82.09 0.00 0.00 6083.96 2908.16 14854.83 00:33:08.051 =================================================================================================================== 00:33:08.051 Total : 21013.78 82.09 0.00 0.00 6083.96 2908.16 14854.83 00:33:08.051 0 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:08.051 | select(.opcode=="crc32c") 00:33:08.051 | "\(.module_name) \(.executed)"' 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3283307 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3283307 ']' 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3283307 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3283307 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3283307' 00:33:08.051 killing process with pid 3283307 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3283307 00:33:08.051 Received shutdown signal, test time was about 2.000000 seconds 00:33:08.051 00:33:08.051 Latency(us) 00:33:08.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.051 =================================================================================================================== 00:33:08.051 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3283307 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:08.051 20:46:23 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3283998 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3283998 /var/tmp/bperf.sock 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3283998 ']' 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:08.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:08.051 20:46:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:08.051 [2024-05-13 20:46:23.805887] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:33:08.051 [2024-05-13 20:46:23.805939] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283998 ] 00:33:08.051 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:08.051 Zero copy mechanism will not be used. 
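The host/digest.sh@77-84 trace above shows how run_bperf's positional arguments become the bdevperf command line for this run, and why the zero-copy notice is printed: the 131072-byte I/O size is larger than the 65536-byte threshold quoted in the message, so the data path copies instead. A condensed view of the invocation, with the flag roles as they are used here (illustrative; the full logic lives in host/digest.sh):

# run_bperf randread 131072 16 false  ->  rw=randread, bs=131072, qd=16, scan_dsa=false
# -z             : start idle and wait for the perform_tests RPC sent later via bdevperf.py
# --wait-for-rpc : defer framework init until the framework_start_init RPC
rw=randread bs=131072 qd=16
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w "$rw" -o "$bs" -q "$qd" -t 2 -z --wait-for-rpc
# 131072 > 65536, hence "Zero copy mechanism will not be used." for the 128 KiB runs.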
00:33:08.051 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.051 [2024-05-13 20:46:23.887863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.051 [2024-05-13 20:46:23.941265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.992 20:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:08.992 20:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:08.992 20:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:08.992 20:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:08.992 20:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:08.992 20:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:08.992 20:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:09.253 nvme0n1 00:33:09.253 20:46:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:09.253 20:46:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:09.514 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:09.514 Zero copy mechanism will not be used. 00:33:09.514 Running I/O for 2 seconds... 
00:33:11.425 00:33:11.425 Latency(us) 00:33:11.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.425 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:11.425 nvme0n1 : 2.01 2906.44 363.30 0.00 0.00 5500.50 1522.35 11632.64 00:33:11.426 =================================================================================================================== 00:33:11.426 Total : 2906.44 363.30 0.00 0.00 5500.50 1522.35 11632.64 00:33:11.426 0 00:33:11.426 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:11.426 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:11.426 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:11.426 | select(.opcode=="crc32c") 00:33:11.426 | "\(.module_name) \(.executed)"' 00:33:11.426 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:11.426 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3283998 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3283998 ']' 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3283998 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3283998 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3283998' 00:33:11.686 killing process with pid 3283998 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3283998 00:33:11.686 Received shutdown signal, test time was about 2.000000 seconds 00:33:11.686 00:33:11.686 Latency(us) 00:33:11.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.686 =================================================================================================================== 00:33:11.686 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3283998 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:11.686 20:46:27 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3284681 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3284681 /var/tmp/bperf.sock 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3284681 ']' 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:11.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:11.686 20:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:11.946 [2024-05-13 20:46:27.636834] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:33:11.946 [2024-05-13 20:46:27.636883] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284681 ] 00:33:11.946 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.946 [2024-05-13 20:46:27.718467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.946 [2024-05-13 20:46:27.771728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.516 20:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:12.516 20:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:12.516 20:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:12.516 20:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:12.516 20:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:12.777 20:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:12.777 20:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:13.037 nvme0n1 00:33:13.037 20:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:13.037 20:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:13.297 Running I/O for 2 seconds... 
00:33:15.206 00:33:15.206 Latency(us) 00:33:15.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.206 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:15.206 nvme0n1 : 2.00 22488.87 87.85 0.00 0.00 5683.87 2198.19 14745.60 00:33:15.206 =================================================================================================================== 00:33:15.206 Total : 22488.87 87.85 0.00 0.00 5683.87 2198.19 14745.60 00:33:15.206 0 00:33:15.206 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:15.206 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:15.206 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:15.206 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:15.206 | select(.opcode=="crc32c") 00:33:15.206 | "\(.module_name) \(.executed)"' 00:33:15.206 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3284681 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3284681 ']' 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3284681 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3284681 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3284681' 00:33:15.477 killing process with pid 3284681 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3284681 00:33:15.477 Received shutdown signal, test time was about 2.000000 seconds 00:33:15.477 00:33:15.477 Latency(us) 00:33:15.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.477 =================================================================================================================== 00:33:15.477 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3284681 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:15.477 20:46:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3285370 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3285370 /var/tmp/bperf.sock 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3285370 ']' 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:15.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:15.477 20:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:15.477 [2024-05-13 20:46:31.407611] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:33:15.477 [2024-05-13 20:46:31.407666] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285370 ] 00:33:15.477 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:15.477 Zero copy mechanism will not be used. 
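Every run above finishes with the same verification: digest.sh reads the accel statistics back over the bperf socket and checks that the crc32c operations were executed by the expected module, which is "software" here because all of these runs pass scan_dsa=false. A minimal reproduction of that check, reusing the jq filter visible in the trace:

# Pull crc32c stats from the running bdevperf instance and assert the software
# module actually executed something (scan_dsa=false, so no DSA offload expected).
read -r acc_module acc_executed < <(
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 )) || echo "no crc32c operations were executed"
[[ $acc_module == software ]] || echo "unexpected accel module: $acc_module"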
00:33:15.737 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.737 [2024-05-13 20:46:31.487378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.737 [2024-05-13 20:46:31.540516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.306 20:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:16.306 20:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:16.306 20:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:16.306 20:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:16.306 20:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:16.568 20:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:16.568 20:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:16.828 nvme0n1 00:33:17.088 20:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:17.088 20:46:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:17.088 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:17.088 Zero copy mechanism will not be used. 00:33:17.088 Running I/O for 2 seconds... 
00:33:19.000 00:33:19.000 Latency(us) 00:33:19.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.000 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:19.000 nvme0n1 : 2.00 3509.98 438.75 0.00 0.00 4551.41 1672.53 11523.41 00:33:19.000 =================================================================================================================== 00:33:19.000 Total : 3509.98 438.75 0.00 0.00 4551.41 1672.53 11523.41 00:33:19.000 0 00:33:19.000 20:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:19.000 20:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:19.000 20:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:19.000 20:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:19.000 20:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:19.000 | select(.opcode=="crc32c") 00:33:19.000 | "\(.module_name) \(.executed)"' 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3285370 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3285370 ']' 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3285370 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3285370 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3285370' 00:33:19.261 killing process with pid 3285370 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3285370 00:33:19.261 Received shutdown signal, test time was about 2.000000 seconds 00:33:19.261 00:33:19.261 Latency(us) 00:33:19.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.261 =================================================================================================================== 00:33:19.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:19.261 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3285370 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3282969 00:33:19.522 20:46:35 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3282969 ']' 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3282969 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3282969 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3282969' 00:33:19.522 killing process with pid 3282969 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3282969 00:33:19.522 [2024-05-13 20:46:35.271581] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3282969 00:33:19.522 00:33:19.522 real 0m16.467s 00:33:19.522 user 0m32.219s 00:33:19.522 sys 0m3.331s 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.522 ************************************ 00:33:19.522 END TEST nvmf_digest_clean 00:33:19.522 ************************************ 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:19.522 20:46:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:19.783 ************************************ 00:33:19.783 START TEST nvmf_digest_error 00:33:19.783 ************************************ 00:33:19.783 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:19.783 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:19.783 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:19.783 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:19.783 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:19.783 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3286240 00:33:19.783 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3286240 00:33:19.783 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:19.783 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 
3286240 ']' 00:33:19.783 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.783 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:19.783 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.783 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:19.783 20:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:19.783 [2024-05-13 20:46:35.547251] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:33:19.783 [2024-05-13 20:46:35.547300] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:19.783 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.783 [2024-05-13 20:46:35.622620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.783 [2024-05-13 20:46:35.688584] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:19.783 [2024-05-13 20:46:35.688620] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:19.783 [2024-05-13 20:46:35.688627] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:19.783 [2024-05-13 20:46:35.688634] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:19.783 [2024-05-13 20:46:35.688639] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
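What follows is the error-injection variant of the same digest test. The target's crc32c operations get routed to the accel "error" module, injection stays disabled while the bdevperf initiator connects with data digest enabled (--ddgst), and it is then switched to corrupt mode so the host sees digest failures on its reads. A condensed sketch of the RPC sequence that the traces below spell out, assuming rpc_cmd addresses the nvmf target's RPC socket and bperf_rpc addresses /var/tmp/bperf.sock:

# Route the target's crc32c operations to the error-injection accel module
# (injection itself stays disabled until the initiator is connected).
scripts/rpc.py accel_assign_opc -o crc32c -m error
# Initiator side (bdevperf over /var/tmp/bperf.sock): keep per-error statistics and
# retry failed I/O indefinitely.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
scripts/rpc.py accel_error_inject_error -o crc32c -t disable
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Switch injection to corrupt mode for crc32c (the "-i 256" argument is copied verbatim
# from the trace below) and start the workload; the host then logs "data digest error
# on tqpair" and its READs complete as COMMAND TRANSIENT TRANSPORT ERROR, which the
# initiator can keep retrying given --bdev-retry-count -1.
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests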
00:33:19.783 [2024-05-13 20:46:35.688658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:20.726 [2024-05-13 20:46:36.366583] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:20.726 null0 00:33:20.726 [2024-05-13 20:46:36.443140] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.726 [2024-05-13 20:46:36.467132] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:20.726 [2024-05-13 20:46:36.467394] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3286424 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3286424 /var/tmp/bperf.sock 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3286424 ']' 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:20.726 
20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:20.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:20.726 20:46:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:20.726 [2024-05-13 20:46:36.519139] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:33:20.726 [2024-05-13 20:46:36.519187] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286424 ] 00:33:20.726 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.726 [2024-05-13 20:46:36.598725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.726 [2024-05-13 20:46:36.653260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.667 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:21.667 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:21.667 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:21.667 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:21.667 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:21.667 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.667 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:21.667 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.667 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:21.667 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:21.927 nvme0n1 00:33:21.927 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:21.927 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.927 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:21.927 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.927 20:46:37 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:21.927 20:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:21.927 Running I/O for 2 seconds... 00:33:22.188 [2024-05-13 20:46:37.874481] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.188 [2024-05-13 20:46:37.874513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.188 [2024-05-13 20:46:37.874521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.188 [2024-05-13 20:46:37.886386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.188 [2024-05-13 20:46:37.886405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:37.886413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:37.899659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:37.899677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:37.899684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:37.910153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:37.910171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:37.910178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:37.922512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:37.922529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:37.922535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:37.934808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:37.934829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:37.934836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:37.946446] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:37.946463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:37.946469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:37.958748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:37.958766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:37.958772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:37.970614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:37.970631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:37.970637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:37.982560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:37.982577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:37.982584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:37.994948] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:37.994964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:37.994971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:38.006483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:38.006499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:38.006506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:38.018720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:38.018737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:38.018743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:38.031595] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:38.031611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:38.031621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:38.042135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:38.042152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:38.042158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:38.054118] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:38.054134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:38.054140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:38.066699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:38.066715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:38.066721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:38.077875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:38.077891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:38.077897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:38.090254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:38.090271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:38.090277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:38.101611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:38.101628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:38.101634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:38.114621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:38.114637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:38.114643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.189 [2024-05-13 20:46:38.126729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.189 [2024-05-13 20:46:38.126747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.189 [2024-05-13 20:46:38.126754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.451 [2024-05-13 20:46:38.137987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.451 [2024-05-13 20:46:38.138007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.451 [2024-05-13 20:46:38.138013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.451 [2024-05-13 20:46:38.149801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.451 [2024-05-13 20:46:38.149818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.451 [2024-05-13 20:46:38.149824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.451 [2024-05-13 20:46:38.161641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.451 [2024-05-13 20:46:38.161657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.451 [2024-05-13 20:46:38.161663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.451 [2024-05-13 20:46:38.173700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.451 [2024-05-13 20:46:38.173717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.451 [2024-05-13 20:46:38.173723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.451 [2024-05-13 20:46:38.185830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.451 [2024-05-13 20:46:38.185847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.451 [2024-05-13 20:46:38.185853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.451 [2024-05-13 20:46:38.197468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.451 [2024-05-13 20:46:38.197484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.451 [2024-05-13 20:46:38.197490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.451 [2024-05-13 20:46:38.209326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.451 [2024-05-13 20:46:38.209342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.451 [2024-05-13 20:46:38.209348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.451 [2024-05-13 20:46:38.221700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.451 [2024-05-13 20:46:38.221716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.451 [2024-05-13 20:46:38.221722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.451 [2024-05-13 20:46:38.233324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.451 [2024-05-13 20:46:38.233340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.451 [2024-05-13 20:46:38.233346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.451 [2024-05-13 20:46:38.245445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.451 [2024-05-13 20:46:38.245461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.451 [2024-05-13 20:46:38.245467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.451 [2024-05-13 20:46:38.257616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.451 [2024-05-13 20:46:38.257633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.451 [2024-05-13 20:46:38.257639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.451 [2024-05-13 20:46:38.269860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.451 [2024-05-13 20:46:38.269876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.451 [2024-05-13 20:46:38.269882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.451 [2024-05-13 20:46:38.280976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.451 [2024-05-13 20:46:38.280992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.451 [2024-05-13 20:46:38.280998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.452 [2024-05-13 20:46:38.293506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.452 [2024-05-13 20:46:38.293522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.452 [2024-05-13 20:46:38.293528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.452 [2024-05-13 20:46:38.304333] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.452 [2024-05-13 20:46:38.304349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.452 [2024-05-13 20:46:38.304355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.452 [2024-05-13 20:46:38.318379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.452 [2024-05-13 20:46:38.318396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.452 [2024-05-13 20:46:38.318401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.452 [2024-05-13 20:46:38.328712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.452 [2024-05-13 20:46:38.328728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.452 [2024-05-13 20:46:38.328734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.452 [2024-05-13 20:46:38.340349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.452 [2024-05-13 20:46:38.340365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.452 [2024-05-13 20:46:38.340375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.452 [2024-05-13 20:46:38.352288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.452 [2024-05-13 20:46:38.352304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:22.452 [2024-05-13 20:46:38.352310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.452 [2024-05-13 20:46:38.365402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.452 [2024-05-13 20:46:38.365419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.452 [2024-05-13 20:46:38.365425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.452 [2024-05-13 20:46:38.377281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.452 [2024-05-13 20:46:38.377297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.452 [2024-05-13 20:46:38.377303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.452 [2024-05-13 20:46:38.389022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.452 [2024-05-13 20:46:38.389038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.452 [2024-05-13 20:46:38.389045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.400725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.400741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.400747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.412542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.412559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.412565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.423632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.423647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.423654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.436196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.436213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 
lba:22178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.436219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.447839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.447859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.447865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.460049] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.460066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.460072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.472388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.472405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.472411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.485708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.485724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.485731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.496131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.496148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.496155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.508153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.508169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.508176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.519675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.519691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.519697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.532348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.532365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.532371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.544726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.544742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.544748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.556999] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.557016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.557022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.568452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.568468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.568475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.580572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.580589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.580595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.592292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.592308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.592316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.603590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 
00:33:22.713 [2024-05-13 20:46:38.603606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.603612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.615845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.615861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.615867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.628131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.628148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.628154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.641109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.641126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.641132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.713 [2024-05-13 20:46:38.651250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.713 [2024-05-13 20:46:38.651270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.713 [2024-05-13 20:46:38.651277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.664439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.664456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.664462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.676376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.676393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.676399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.688779] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.688796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.688803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.699768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.699785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.699791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.711401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.711418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.711426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.723974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.723992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.723998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.736419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.736436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.736442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.747268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.747284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.747290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.759505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.759522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.759528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.772120] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.772137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.772144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.784140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.784157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.784163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.796117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.796134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.796140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.807526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.807543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.807550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.819285] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.819302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.819308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.831673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.831690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.831696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.844621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.844638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.844644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:22.974 [2024-05-13 20:46:38.856819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.856836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.856845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.867049] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.867065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.974 [2024-05-13 20:46:38.867071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.974 [2024-05-13 20:46:38.880607] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.974 [2024-05-13 20:46:38.880624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.975 [2024-05-13 20:46:38.880630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.975 [2024-05-13 20:46:38.892475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.975 [2024-05-13 20:46:38.892492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.975 [2024-05-13 20:46:38.892498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.975 [2024-05-13 20:46:38.905508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.975 [2024-05-13 20:46:38.905525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.975 [2024-05-13 20:46:38.905532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.975 [2024-05-13 20:46:38.916962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:22.975 [2024-05-13 20:46:38.916979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.975 [2024-05-13 20:46:38.916985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.236 [2024-05-13 20:46:38.927692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.236 [2024-05-13 20:46:38.927709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.236 [2024-05-13 20:46:38.927715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.236 [2024-05-13 20:46:38.939011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.236 [2024-05-13 20:46:38.939029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.236 [2024-05-13 20:46:38.939035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.236 [2024-05-13 20:46:38.952119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.236 [2024-05-13 20:46:38.952136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.236 [2024-05-13 20:46:38.952142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.236 [2024-05-13 20:46:38.963548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.236 [2024-05-13 20:46:38.963568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.236 [2024-05-13 20:46:38.963574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.236 [2024-05-13 20:46:38.976667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.236 [2024-05-13 20:46:38.976684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.236 [2024-05-13 20:46:38.976690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.236 [2024-05-13 20:46:38.988450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.236 [2024-05-13 20:46:38.988467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.236 [2024-05-13 20:46:38.988473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.236 [2024-05-13 20:46:38.999871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.236 [2024-05-13 20:46:38.999888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.236 [2024-05-13 20:46:38.999894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.236 [2024-05-13 20:46:39.010998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.236 [2024-05-13 20:46:39.011015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.236 [2024-05-13 20:46:39.011021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.236 [2024-05-13 20:46:39.024307] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.236 [2024-05-13 20:46:39.024329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.236 [2024-05-13 20:46:39.024335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.236 [2024-05-13 20:46:39.034673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.236 [2024-05-13 20:46:39.034689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.236 [2024-05-13 20:46:39.034696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.237 [2024-05-13 20:46:39.046740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.237 [2024-05-13 20:46:39.046757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.237 [2024-05-13 20:46:39.046763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.237 [2024-05-13 20:46:39.057887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.237 [2024-05-13 20:46:39.057904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.237 [2024-05-13 20:46:39.057911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.237 [2024-05-13 20:46:39.071701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.237 [2024-05-13 20:46:39.071718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.237 [2024-05-13 20:46:39.071725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.237 [2024-05-13 20:46:39.083782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.237 [2024-05-13 20:46:39.083798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.237 [2024-05-13 20:46:39.083804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.237 [2024-05-13 20:46:39.094157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.237 [2024-05-13 20:46:39.094174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:23.237 [2024-05-13 20:46:39.094180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.237 [2024-05-13 20:46:39.107813] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.237 [2024-05-13 20:46:39.107829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.237 [2024-05-13 20:46:39.107836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.237 [2024-05-13 20:46:39.120201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.237 [2024-05-13 20:46:39.120217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.237 [2024-05-13 20:46:39.120223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.237 [2024-05-13 20:46:39.130267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.237 [2024-05-13 20:46:39.130284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.237 [2024-05-13 20:46:39.130290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.237 [2024-05-13 20:46:39.142877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.237 [2024-05-13 20:46:39.142895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.237 [2024-05-13 20:46:39.142903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.237 [2024-05-13 20:46:39.154088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.237 [2024-05-13 20:46:39.154106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.237 [2024-05-13 20:46:39.154112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.237 [2024-05-13 20:46:39.166569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.237 [2024-05-13 20:46:39.166586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.237 [2024-05-13 20:46:39.166596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.237 [2024-05-13 20:46:39.178629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.237 [2024-05-13 20:46:39.178645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 
lba:23950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.237 [2024-05-13 20:46:39.178651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.497 [2024-05-13 20:46:39.189914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.497 [2024-05-13 20:46:39.189931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.497 [2024-05-13 20:46:39.189937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.497 [2024-05-13 20:46:39.202177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.497 [2024-05-13 20:46:39.202194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.497 [2024-05-13 20:46:39.202200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.497 [2024-05-13 20:46:39.214088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.497 [2024-05-13 20:46:39.214105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.497 [2024-05-13 20:46:39.214111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.497 [2024-05-13 20:46:39.226166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.497 [2024-05-13 20:46:39.226182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.497 [2024-05-13 20:46:39.226188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.497 [2024-05-13 20:46:39.238083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.497 [2024-05-13 20:46:39.238100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.497 [2024-05-13 20:46:39.238106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.497 [2024-05-13 20:46:39.249841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.497 [2024-05-13 20:46:39.249858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.497 [2024-05-13 20:46:39.249864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.497 [2024-05-13 20:46:39.261767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.497 [2024-05-13 20:46:39.261784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.497 [2024-05-13 20:46:39.261790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.497 [2024-05-13 20:46:39.273867] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.497 [2024-05-13 20:46:39.273884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.498 [2024-05-13 20:46:39.273891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.498 [2024-05-13 20:46:39.286271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.498 [2024-05-13 20:46:39.286288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.498 [2024-05-13 20:46:39.286294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.498 [2024-05-13 20:46:39.298862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.498 [2024-05-13 20:46:39.298880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.498 [2024-05-13 20:46:39.298887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.498 [2024-05-13 20:46:39.309987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.498 [2024-05-13 20:46:39.310003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.498 [2024-05-13 20:46:39.310010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.498 [2024-05-13 20:46:39.322997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.498 [2024-05-13 20:46:39.323013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.498 [2024-05-13 20:46:39.323019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.498 [2024-05-13 20:46:39.334204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.498 [2024-05-13 20:46:39.334221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.498 [2024-05-13 20:46:39.334227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.498 [2024-05-13 20:46:39.346383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 
00:33:23.498 [2024-05-13 20:46:39.346400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.498 [2024-05-13 20:46:39.346406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.498 [2024-05-13 20:46:39.358910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.498 [2024-05-13 20:46:39.358927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.498 [2024-05-13 20:46:39.358933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.498 [2024-05-13 20:46:39.370899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.498 [2024-05-13 20:46:39.370915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.498 [2024-05-13 20:46:39.370925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.498 [2024-05-13 20:46:39.381496] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.498 [2024-05-13 20:46:39.381512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.498 [2024-05-13 20:46:39.381518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.498 [2024-05-13 20:46:39.394475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.498 [2024-05-13 20:46:39.394492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.498 [2024-05-13 20:46:39.394498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.498 [2024-05-13 20:46:39.407260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.498 [2024-05-13 20:46:39.407277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.498 [2024-05-13 20:46:39.407283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.498 [2024-05-13 20:46:39.417465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.498 [2024-05-13 20:46:39.417481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.498 [2024-05-13 20:46:39.417487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.498 [2024-05-13 20:46:39.430361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.498 [2024-05-13 20:46:39.430377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.498 [2024-05-13 20:46:39.430383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.441630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.441647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.441653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.454609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.454625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.454631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.464373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.464390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.464396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.477826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.477846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.477852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.488911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.488928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.488934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.500852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.500869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.500874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.515184] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.515201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.515207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.524783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.524799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.524805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.539074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.539091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.539097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.551194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.551211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.551218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.560192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.560208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.560215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.573602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.573618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.573624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.587953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.587970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.587976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:23.759 [2024-05-13 20:46:39.600114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.600130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.600136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.610793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.610809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.610815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.759 [2024-05-13 20:46:39.622801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.759 [2024-05-13 20:46:39.622817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.759 [2024-05-13 20:46:39.622823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.760 [2024-05-13 20:46:39.635355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.760 [2024-05-13 20:46:39.635372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.760 [2024-05-13 20:46:39.635379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.760 [2024-05-13 20:46:39.647185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.760 [2024-05-13 20:46:39.647202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.760 [2024-05-13 20:46:39.647208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.760 [2024-05-13 20:46:39.658897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.760 [2024-05-13 20:46:39.658913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.760 [2024-05-13 20:46:39.658919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.760 [2024-05-13 20:46:39.670801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.760 [2024-05-13 20:46:39.670817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.760 [2024-05-13 20:46:39.670823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.760 [2024-05-13 20:46:39.683684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.760 [2024-05-13 20:46:39.683700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.760 [2024-05-13 20:46:39.683709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:23.760 [2024-05-13 20:46:39.695607] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:23.760 [2024-05-13 20:46:39.695624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:23.760 [2024-05-13 20:46:39.695630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.021 [2024-05-13 20:46:39.706735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:24.021 [2024-05-13 20:46:39.706752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.021 [2024-05-13 20:46:39.706758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.021 [2024-05-13 20:46:39.718452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:24.021 [2024-05-13 20:46:39.718468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.021 [2024-05-13 20:46:39.718474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.021 [2024-05-13 20:46:39.730528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:24.021 [2024-05-13 20:46:39.730545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.021 [2024-05-13 20:46:39.730551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.021 [2024-05-13 20:46:39.742145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:24.021 [2024-05-13 20:46:39.742162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.021 [2024-05-13 20:46:39.742168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.021 [2024-05-13 20:46:39.754787] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:24.021 [2024-05-13 20:46:39.754803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.021 [2024-05-13 20:46:39.754809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.021 [2024-05-13 20:46:39.767882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:24.021 [2024-05-13 20:46:39.767899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.021 [2024-05-13 20:46:39.767905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.021 [2024-05-13 20:46:39.778679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:24.022 [2024-05-13 20:46:39.778695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.022 [2024-05-13 20:46:39.778701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.022 [2024-05-13 20:46:39.788994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:24.022 [2024-05-13 20:46:39.789014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.022 [2024-05-13 20:46:39.789020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.022 [2024-05-13 20:46:39.801960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:24.022 [2024-05-13 20:46:39.801977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.022 [2024-05-13 20:46:39.801983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.022 [2024-05-13 20:46:39.815458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:24.022 [2024-05-13 20:46:39.815475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.022 [2024-05-13 20:46:39.815481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.022 [2024-05-13 20:46:39.827835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:24.022 [2024-05-13 20:46:39.827851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.022 [2024-05-13 20:46:39.827858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.022 [2024-05-13 20:46:39.840408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:24.022 [2024-05-13 20:46:39.840425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:24.022 [2024-05-13 20:46:39.840432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.022 [2024-05-13 20:46:39.850963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180eb70) 00:33:24.022 [2024-05-13 20:46:39.850979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.022 [2024-05-13 20:46:39.850985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.022 00:33:24.022 Latency(us) 00:33:24.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.022 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:24.022 nvme0n1 : 2.00 21225.97 82.91 0.00 0.00 6024.11 2908.16 16493.23 00:33:24.022 =================================================================================================================== 00:33:24.022 Total : 21225.97 82.91 0.00 0.00 6024.11 2908.16 16493.23 00:33:24.022 0 00:33:24.022 20:46:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:24.022 20:46:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:24.022 20:46:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:24.022 | .driver_specific 00:33:24.022 | .nvme_error 00:33:24.022 | .status_code 00:33:24.022 | .command_transient_transport_error' 00:33:24.022 20:46:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 166 > 0 )) 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3286424 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3286424 ']' 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3286424 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3286424 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3286424' 00:33:24.283 killing process with pid 3286424 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3286424 00:33:24.283 Received shutdown signal, test time was about 2.000000 seconds 00:33:24.283 00:33:24.283 Latency(us) 00:33:24.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.283 
=================================================================================================================== 00:33:24.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3286424 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3287108 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3287108 /var/tmp/bperf.sock 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3287108 ']' 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:24.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:24.283 20:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:24.543 [2024-05-13 20:46:40.268502] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:33:24.543 [2024-05-13 20:46:40.268588] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287108 ] 00:33:24.543 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:24.543 Zero copy mechanism will not be used. 
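Condensed from the trace above: the 131072-byte, queue-depth-16 error run starts a fresh bdevperf instance on a private RPC socket and waits for it to listen before any configuration is sent. A sketch of that launch with the flags and workspace paths from this run; backgrounding with & and the readiness loop via rpc_get_methods are simplifications standing in for the harness's bperfpid/waitforlisten bookkeeping:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -z keeps bdevperf idle until perform_tests is issued over /var/tmp/bperf.sock
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Stand-in for waitforlisten: poll the RPC socket until bdevperf answers.
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
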
00:33:24.543 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.543 [2024-05-13 20:46:40.352639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.543 [2024-05-13 20:46:40.405273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.113 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:25.113 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:25.113 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:25.113 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:25.373 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:25.373 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.373 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:25.373 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.373 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:25.373 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:25.633 nvme0n1 00:33:25.894 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:25.894 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.894 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:25.894 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.894 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:25.894 20:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:25.894 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:25.894 Zero copy mechanism will not be used. 00:33:25.894 Running I/O for 2 seconds... 
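The configuration traced above reduces to a short RPC sequence before the timed I/O starts: turn on per-command NVMe error counters, leave the crc32c injector disabled while attaching, attach the NVMe-oF TCP controller with data digest enabled (--ddgst), then arm the injector and issue perform_tests. All names, addresses and flags below are taken from this trace; the one assumption is routing accel_error_inject_error through rpc.py's default socket, since the harness issues it via its rpc_cmd wrapper and that socket is not shown here:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }   # same expansion as digest.sh's bperf_rpc
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # collect per-command error counters
  "$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable  # injector off (assumed default socket)
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # --ddgst enables NVMe/TCP data digests
  "$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32   # arm crc32c corruption (flags as traced)
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests   # run the 2-second workload

Arming the injector only after the controller is attached presumably lets connection setup and namespace discovery complete cleanly, so the corrupted digests land on the randread workload itself and show up as the (00/22) completions that follow.
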
00:33:25.894 [2024-05-13 20:46:41.688125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:25.894 [2024-05-13 20:46:41.688158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.894 [2024-05-13 20:46:41.688166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.894 [2024-05-13 20:46:41.699153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:25.894 [2024-05-13 20:46:41.699175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.894 [2024-05-13 20:46:41.699182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.894 [2024-05-13 20:46:41.712434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:25.894 [2024-05-13 20:46:41.712463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.894 [2024-05-13 20:46:41.712470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.894 [2024-05-13 20:46:41.727106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:25.894 [2024-05-13 20:46:41.727125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.894 [2024-05-13 20:46:41.727136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.894 [2024-05-13 20:46:41.740572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:25.894 [2024-05-13 20:46:41.740591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.894 [2024-05-13 20:46:41.740597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.894 [2024-05-13 20:46:41.754269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:25.894 [2024-05-13 20:46:41.754289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.894 [2024-05-13 20:46:41.754295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.894 [2024-05-13 20:46:41.764510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:25.894 [2024-05-13 20:46:41.764529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.894 [2024-05-13 20:46:41.764535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.894 [2024-05-13 20:46:41.778065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:25.894 [2024-05-13 20:46:41.778083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.894 [2024-05-13 20:46:41.778089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.894 [2024-05-13 20:46:41.790874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:25.894 [2024-05-13 20:46:41.790891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.894 [2024-05-13 20:46:41.790898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.894 [2024-05-13 20:46:41.802223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:25.894 [2024-05-13 20:46:41.802242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.894 [2024-05-13 20:46:41.802248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.894 [2024-05-13 20:46:41.812808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:25.894 [2024-05-13 20:46:41.812826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.894 [2024-05-13 20:46:41.812832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.894 [2024-05-13 20:46:41.823614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:25.894 [2024-05-13 20:46:41.823633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.895 [2024-05-13 20:46:41.823639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.895 [2024-05-13 20:46:41.834411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:25.895 [2024-05-13 20:46:41.834434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.895 [2024-05-13 20:46:41.834440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.155 [2024-05-13 20:46:41.844794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.155 [2024-05-13 20:46:41.844813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.155 [2024-05-13 20:46:41.844819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.155 [2024-05-13 20:46:41.855886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.155 [2024-05-13 20:46:41.855904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.155 [2024-05-13 20:46:41.855910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.155 [2024-05-13 20:46:41.865897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.155 [2024-05-13 20:46:41.865915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.155 [2024-05-13 20:46:41.865921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.155 [2024-05-13 20:46:41.876954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.155 [2024-05-13 20:46:41.876973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.155 [2024-05-13 20:46:41.876979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.155 [2024-05-13 20:46:41.888219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.155 [2024-05-13 20:46:41.888237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.155 [2024-05-13 20:46:41.888243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.155 [2024-05-13 20:46:41.899333] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.155 [2024-05-13 20:46:41.899351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.155 [2024-05-13 20:46:41.899358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.155 [2024-05-13 20:46:41.910295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.155 [2024-05-13 20:46:41.910317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.155 [2024-05-13 20:46:41.910323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.155 [2024-05-13 20:46:41.922605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.155 [2024-05-13 20:46:41.922624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.155 
[2024-05-13 20:46:41.922630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.155 [2024-05-13 20:46:41.932161] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.155 [2024-05-13 20:46:41.932180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.155 [2024-05-13 20:46:41.932186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.155 [2024-05-13 20:46:41.942596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.155 [2024-05-13 20:46:41.942615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.155 [2024-05-13 20:46:41.942621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.155 [2024-05-13 20:46:41.953601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.155 [2024-05-13 20:46:41.953619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.155 [2024-05-13 20:46:41.953626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.155 [2024-05-13 20:46:41.964772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.155 [2024-05-13 20:46:41.964790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.155 [2024-05-13 20:46:41.964796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.155 [2024-05-13 20:46:41.976813] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.155 [2024-05-13 20:46:41.976831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.155 [2024-05-13 20:46:41.976838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.155 [2024-05-13 20:46:41.986980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.155 [2024-05-13 20:46:41.986998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.155 [2024-05-13 20:46:41.987004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.156 [2024-05-13 20:46:41.997901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.156 [2024-05-13 20:46:41.997920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.156 [2024-05-13 20:46:41.997926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.156 [2024-05-13 20:46:42.008928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.156 [2024-05-13 20:46:42.008946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.156 [2024-05-13 20:46:42.008952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.156 [2024-05-13 20:46:42.020657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.156 [2024-05-13 20:46:42.020676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.156 [2024-05-13 20:46:42.020686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.156 [2024-05-13 20:46:42.031725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.156 [2024-05-13 20:46:42.031743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.156 [2024-05-13 20:46:42.031749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.156 [2024-05-13 20:46:42.042556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.156 [2024-05-13 20:46:42.042574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.156 [2024-05-13 20:46:42.042580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.156 [2024-05-13 20:46:42.054713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.156 [2024-05-13 20:46:42.054732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.156 [2024-05-13 20:46:42.054738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.156 [2024-05-13 20:46:42.066300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.156 [2024-05-13 20:46:42.066324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.156 [2024-05-13 20:46:42.066330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.156 [2024-05-13 20:46:42.077596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.156 [2024-05-13 20:46:42.077615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.156 [2024-05-13 20:46:42.077621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.156 [2024-05-13 20:46:42.089007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.156 [2024-05-13 20:46:42.089025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.156 [2024-05-13 20:46:42.089032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.416 [2024-05-13 20:46:42.099988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.416 [2024-05-13 20:46:42.100006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.416 [2024-05-13 20:46:42.100012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.416 [2024-05-13 20:46:42.111194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.416 [2024-05-13 20:46:42.111213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.416 [2024-05-13 20:46:42.111220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.416 [2024-05-13 20:46:42.122641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.416 [2024-05-13 20:46:42.122659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.416 [2024-05-13 20:46:42.122665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.416 [2024-05-13 20:46:42.131147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.416 [2024-05-13 20:46:42.131165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.416 [2024-05-13 20:46:42.131171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.416 [2024-05-13 20:46:42.142651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.416 [2024-05-13 20:46:42.142669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.416 [2024-05-13 20:46:42.142676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.416 [2024-05-13 20:46:42.153582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.416 [2024-05-13 20:46:42.153600] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.416 [2024-05-13 20:46:42.153606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.416 [2024-05-13 20:46:42.165453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.416 [2024-05-13 20:46:42.165471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.416 [2024-05-13 20:46:42.165477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.416 [2024-05-13 20:46:42.175738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.416 [2024-05-13 20:46:42.175757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.416 [2024-05-13 20:46:42.175763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.416 [2024-05-13 20:46:42.186604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.186623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.186631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.197437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.197456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.197462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.208506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.208526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.208536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.219348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.219366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.219372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.230490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.230509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.230515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.240120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.240139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.240145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.251413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.251432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.251438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.264246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.264264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.264271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.275777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.275796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.275802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.286605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.286623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.286630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.297559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.297578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.297584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.308028] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.308050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.308056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.319183] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.319200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.319206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.332816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.332835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.332841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.346689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.346707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.346713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.417 [2024-05-13 20:46:42.358915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.417 [2024-05-13 20:46:42.358934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.417 [2024-05-13 20:46:42.358940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.371090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.371108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.371115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.381574] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.381593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.381599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:33:26.677 [2024-05-13 20:46:42.391635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.391653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.391660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.400623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.400641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.400648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.412246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.412265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.412272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.422649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.422668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.422674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.432437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.432455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.432461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.441968] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.441987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.441993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.451876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.451895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.451901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.462039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.462058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.462064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.471502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.471521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.471528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.482268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.482287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.482293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.493397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.493416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.493426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.503849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.503867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.503874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.513910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.513928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.513934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.525019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.525037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.525044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.535520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.535538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.535545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.545598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.545617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.545624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.556946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.556964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.556970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.567309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.567333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.567340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.577536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.577554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.577560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.677 [2024-05-13 20:46:42.588412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.677 [2024-05-13 20:46:42.588434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.677 [2024-05-13 20:46:42.588440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.678 [2024-05-13 20:46:42.600466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.678 [2024-05-13 20:46:42.600484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:26.678 [2024-05-13 20:46:42.600491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.678 [2024-05-13 20:46:42.610669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.678 [2024-05-13 20:46:42.610687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.678 [2024-05-13 20:46:42.610694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.938 [2024-05-13 20:46:42.621931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.938 [2024-05-13 20:46:42.621950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.938 [2024-05-13 20:46:42.621957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.938 [2024-05-13 20:46:42.633144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.938 [2024-05-13 20:46:42.633163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.938 [2024-05-13 20:46:42.633169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.938 [2024-05-13 20:46:42.644730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.938 [2024-05-13 20:46:42.644749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.938 [2024-05-13 20:46:42.644755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.938 [2024-05-13 20:46:42.655528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.938 [2024-05-13 20:46:42.655546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.938 [2024-05-13 20:46:42.655553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.938 [2024-05-13 20:46:42.666533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.938 [2024-05-13 20:46:42.666551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.938 [2024-05-13 20:46:42.666557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.938 [2024-05-13 20:46:42.677609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.938 [2024-05-13 20:46:42.677627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.938 [2024-05-13 20:46:42.677634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.938 [2024-05-13 20:46:42.688252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.938 [2024-05-13 20:46:42.688272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.938 [2024-05-13 20:46:42.688278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.938 [2024-05-13 20:46:42.699660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.938 [2024-05-13 20:46:42.699679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.938 [2024-05-13 20:46:42.699685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.712489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.712508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.712514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.723830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.723848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.723855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.735411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.735430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.735436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.746427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.746446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.746453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.755509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.755528] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.755534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.766804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.766823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.766829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.777853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.777871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.777884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.787936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.787955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.787961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.797332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.797349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.797355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.807735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.807754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.807760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.819178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.819197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.819203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.831609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.831627] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.831633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.843028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.843046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.843052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.854362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.854381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.854388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.865241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.865260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.865267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.939 [2024-05-13 20:46:42.876108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:26.939 [2024-05-13 20:46:42.876125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.939 [2024-05-13 20:46:42.876132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:42.886631] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:42.886650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:42.886656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:42.895917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:42.895935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:42.895941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:42.906513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 
00:33:27.200 [2024-05-13 20:46:42.906531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:42.906538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:42.917257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:42.917276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:42.917282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:42.928443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:42.928461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:42.928467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:42.938804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:42.938822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:42.938828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:42.950137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:42.950156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:42.950163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:42.961372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:42.961390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:42.961400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:42.973571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:42.973590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:42.973596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:42.984838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:42.984856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:42.984862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:42.995158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:42.995176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:42.995182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:43.007258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:43.007277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:43.007284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:43.019502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:43.019520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:43.019526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:43.029882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:43.029900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:43.029907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:43.039812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:43.039831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:43.039837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:43.052281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:43.052299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:43.052306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:43.064578] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:43.064603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:43.064609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:43.077510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:43.077528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:43.077534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:43.088560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:43.088579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:43.088585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:43.098991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.200 [2024-05-13 20:46:43.099009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.200 [2024-05-13 20:46:43.099016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.200 [2024-05-13 20:46:43.109174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.201 [2024-05-13 20:46:43.109193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.201 [2024-05-13 20:46:43.109199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.201 [2024-05-13 20:46:43.120555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.201 [2024-05-13 20:46:43.120573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.201 [2024-05-13 20:46:43.120579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.201 [2024-05-13 20:46:43.130739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.201 [2024-05-13 20:46:43.130757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.201 [2024-05-13 20:46:43.130764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:33:27.201 [2024-05-13 20:46:43.140439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.201 [2024-05-13 20:46:43.140457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.201 [2024-05-13 20:46:43.140464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.151523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.151541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.151548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.162475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.162493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.162500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.172700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.172719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.172726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.183838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.183855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.183862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.195182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.195201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.195207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.206235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.206253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.206261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.216772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.216790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.216796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.227140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.227158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.227165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.238287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.238306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.238317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.248379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.248397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.248407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.257081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.257099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.257106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.267518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.267536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.267542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.278336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.278354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.278360] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.288442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.288460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.288466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.299103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.299122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.299131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.309456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.309475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.309481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.462 [2024-05-13 20:46:43.319981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.462 [2024-05-13 20:46:43.319999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.462 [2024-05-13 20:46:43.320005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.463 [2024-05-13 20:46:43.330765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.463 [2024-05-13 20:46:43.330783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.463 [2024-05-13 20:46:43.330789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.463 [2024-05-13 20:46:43.341975] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.463 [2024-05-13 20:46:43.341996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.463 [2024-05-13 20:46:43.342002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.463 [2024-05-13 20:46:43.352996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.463 [2024-05-13 20:46:43.353015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.463 [2024-05-13 20:46:43.353021] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.463 [2024-05-13 20:46:43.362786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.463 [2024-05-13 20:46:43.362804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.463 [2024-05-13 20:46:43.362810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.463 [2024-05-13 20:46:43.373237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.463 [2024-05-13 20:46:43.373255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.463 [2024-05-13 20:46:43.373261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.463 [2024-05-13 20:46:43.383992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.463 [2024-05-13 20:46:43.384012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.463 [2024-05-13 20:46:43.384018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.463 [2024-05-13 20:46:43.394726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.463 [2024-05-13 20:46:43.394745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.463 [2024-05-13 20:46:43.394751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.405558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.405577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.405583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.416854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.416873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.416880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.430525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.430543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:27.724 [2024-05-13 20:46:43.430549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.444127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.444145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.444151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.457068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.457087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.457093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.468624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.468642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.468648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.478920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.478938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.478944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.490851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.490870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.490876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.501493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.501511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.501517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.513834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.513852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.513859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.524513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.524531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.524538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.535857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.535876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.535885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.546229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.546247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.546253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.557443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.557462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.557468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.567776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.567795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.567801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.578947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.578966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.578972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.590911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.590929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.590936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.602331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.602349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.602355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.615604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.615622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.615628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.627178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.627196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.627202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.636550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.636568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.636574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.645781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.645799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.645805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.724 [2024-05-13 20:46:43.657960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 00:33:27.724 [2024-05-13 20:46:43.657979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.724 [2024-05-13 20:46:43.657985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.985 [2024-05-13 20:46:43.668399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c82b00) 
00:33:27.985 [2024-05-13 20:46:43.668417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.985 [2024-05-13 20:46:43.668423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.985
00:33:27.985 Latency(us)
00:33:27.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:27.985 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:27.985 nvme0n1 : 2.04 2744.37 343.05 0.00 0.00 5717.81 1351.68 47185.92
00:33:27.985 ===================================================================================================================
00:33:27.985 Total : 2744.37 343.05 0.00 0.00 5717.81 1351.68 47185.92
00:33:27.985 0
00:33:27.985 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:27.985 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:27.985 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:27.985 | .driver_specific
00:33:27.985 | .nvme_error
00:33:27.985 | .status_code
00:33:27.985 | .command_transient_transport_error'
00:33:27.985 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:27.985 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 180 > 0 ))
00:33:27.985 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3287108
00:33:27.985 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3287108 ']'
00:33:27.985 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3287108
00:33:27.985 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:27.985 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:27.985 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3287108
00:33:28.245 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:28.245 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:28.245 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3287108'
00:33:28.245 killing process with pid 3287108
00:33:28.245 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3287108
00:33:28.245 Received shutdown signal, test time was about 2.000000 seconds
00:33:28.245
00:33:28.245 Latency(us)
00:33:28.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:28.245 ===================================================================================================================
00:33:28.245 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:28.245 20:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3287108
00:33:28.245 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:28.245 20:46:44
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:28.245 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:28.245 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:28.245 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:28.245 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3287888 00:33:28.245 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3287888 /var/tmp/bperf.sock 00:33:28.245 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3287888 ']' 00:33:28.245 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:28.245 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:28.245 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:28.245 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:28.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:28.245 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:28.245 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:28.245 [2024-05-13 20:46:44.126913] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
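The bdevperf instance above is launched with -z, so it starts idle and only listens for RPCs on /var/tmp/bperf.sock; waitforlisten blocks until that socket answers before the script configures anything. A minimal sketch of the same start-up pattern, using only the paths and flags shown in the trace (the rpc_get_methods polling loop is a stand-in assumption for waitforlisten, not the helper's actual implementation):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # repository path taken from the trace
  SOCK=/var/tmp/bperf.sock
  # start bdevperf idle (-z): it does nothing until driven over its RPC socket
  $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 4096 -t 2 -q 128 -z &
  # crude stand-in for waitforlisten: poll until the UNIX-domain RPC socket responds
  until $SPDK/scripts/rpc.py -s $SOCK rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
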
00:33:28.245 [2024-05-13 20:46:44.126980] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287888 ] 00:33:28.245 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.513 [2024-05-13 20:46:44.207430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.513 [2024-05-13 20:46:44.261593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.154 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:29.154 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:29.154 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:29.154 20:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:29.154 20:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:29.154 20:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.154 20:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:29.154 20:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.154 20:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:29.154 20:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:29.415 nvme0n1 00:33:29.415 20:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:29.415 20:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.415 20:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:29.415 20:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.415 20:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:29.415 20:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:29.676 Running I/O for 2 seconds... 
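Once the socket is up, the digest-error case is assembled through it: per-opcode NVMe error counters with unlimited bdev retries, a controller attached with TCP data digest enabled (--ddgst), crc32c corruption injected through the accel error RPC, and a timed run whose transient-transport-error count is later read back from bdev_get_iostat. A condensed sketch of those steps, built only from the RPC calls visible in the trace (the injection call goes through rpc_cmd in the trace, whose socket is not expanded in the log, so the default target socket is assumed here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  # keep per-status-code NVMe error stats and retry failed I/Os indefinitely at the bdev layer
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the target with TCP data digest enabled; bad digests surface as transient transport errors
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # inject crc32c corruption in the accel layer (-o crc32c -t corrupt -i 256, as in the trace)
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # run the timed workload, then read the transient-transport-error counter back
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
  $SPDK/scripts/rpc.py -s $SOCK bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
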
00:33:29.676 [2024-05-13 20:46:45.429717] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f8a50 00:33:29.676 [2024-05-13 20:46:45.430674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.430705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.442345] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e5220 00:33:29.676 [2024-05-13 20:46:45.443349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.443369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.453839] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190efae0 00:33:29.676 [2024-05-13 20:46:45.454836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.454854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.465322] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e5a90 00:33:29.676 [2024-05-13 20:46:45.466300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.466318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.476792] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f0350 00:33:29.676 [2024-05-13 20:46:45.477786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.477802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.488228] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e5220 00:33:29.676 [2024-05-13 20:46:45.489317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.489332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.499812] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190efae0 00:33:29.676 [2024-05-13 20:46:45.500803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.500820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0066 p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.511220] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e5a90 00:33:29.676 [2024-05-13 20:46:45.512198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.512215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.524134] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f0350 00:33:29.676 [2024-05-13 20:46:45.525854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.525869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.534577] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f4f40 00:33:29.676 [2024-05-13 20:46:45.535934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.535950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.546179] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fb480 00:33:29.676 [2024-05-13 20:46:45.547509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.547524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.557593] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e7c50 00:33:29.676 [2024-05-13 20:46:45.558706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.558722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.568984] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f0350 00:33:29.676 [2024-05-13 20:46:45.570113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.570129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.580412] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fa3a0 00:33:29.676 [2024-05-13 20:46:45.581559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.581575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.593310] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e7c50 00:33:29.676 [2024-05-13 20:46:45.595187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.595206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.603760] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ef270 00:33:29.676 [2024-05-13 20:46:45.605258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.605274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:29.676 [2024-05-13 20:46:45.615370] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ed0b0 00:33:29.676 [2024-05-13 20:46:45.616937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.676 [2024-05-13 20:46:45.616952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.625949] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190efae0 00:33:29.938 [2024-05-13 20:46:45.627348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.627363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.636367] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fef90 00:33:29.938 [2024-05-13 20:46:45.637419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.637436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.647962] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f92c0 00:33:29.938 [2024-05-13 20:46:45.649009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.649024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.659389] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e8d30 00:33:29.938 [2024-05-13 20:46:45.660413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.660429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.670780] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fef90 00:33:29.938 [2024-05-13 20:46:45.671830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.671845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.682215] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f92c0 00:33:29.938 [2024-05-13 20:46:45.683234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.683249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.693647] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e8d30 00:33:29.938 [2024-05-13 20:46:45.694690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.694707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.705053] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fef90 00:33:29.938 [2024-05-13 20:46:45.706103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.706118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.716481] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f92c0 00:33:29.938 [2024-05-13 20:46:45.717552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.717567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.727876] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e8d30 00:33:29.938 [2024-05-13 20:46:45.728919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.728935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.739306] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fef90 00:33:29.938 [2024-05-13 20:46:45.740353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.740369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.750705] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f92c0 00:33:29.938 [2024-05-13 20:46:45.751750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.751766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.761227] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f46d0 00:33:29.938 [2024-05-13 20:46:45.762158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.762174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.775703] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fa3a0 00:33:29.938 [2024-05-13 20:46:45.777491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.777507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.786004] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f8e88 00:33:29.938 [2024-05-13 20:46:45.787367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.787383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.798678] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ea248 00:33:29.938 [2024-05-13 20:46:45.800509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.800524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.807837] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e73e0 00:33:29.938 [2024-05-13 20:46:45.809032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.809048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.820168] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e2c28 00:33:29.938 [2024-05-13 20:46:45.821522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 
20:46:45.821538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.831352] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e9e10 00:33:29.938 [2024-05-13 20:46:45.832382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.832398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.843233] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f8e88 00:33:29.938 [2024-05-13 20:46:45.844611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.844627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.854637] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f7970 00:33:29.938 [2024-05-13 20:46:45.855999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.856016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.865817] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ed920 00:33:29.938 [2024-05-13 20:46:45.866854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.866870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:29.938 [2024-05-13 20:46:45.878704] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e1f80 00:33:29.938 [2024-05-13 20:46:45.880527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:29.938 [2024-05-13 20:46:45.880542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:45.887886] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f0bc0 00:33:30.199 [2024-05-13 20:46:45.889077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:45.889095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:45.900020] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e1f80 00:33:30.199 [2024-05-13 20:46:45.901058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25539 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:30.199 [2024-05-13 20:46:45.901074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:45.911632] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e6738 00:33:30.199 [2024-05-13 20:46:45.912995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:45.913010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:45.924319] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f6890 00:33:30.199 [2024-05-13 20:46:45.926114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:45.926129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:45.933503] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fd640 00:33:30.199 [2024-05-13 20:46:45.934689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:45.934704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:45.945591] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f6890 00:33:30.199 [2024-05-13 20:46:45.946610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:45.946627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:45.958526] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e5658 00:33:30.199 [2024-05-13 20:46:45.960337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:45.960353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:45.967698] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190df988 00:33:30.199 [2024-05-13 20:46:45.968891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:45.968907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:45.980010] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f7970 00:33:30.199 [2024-05-13 20:46:45.981359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6761 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:45.981375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:45.992717] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e5a90 00:33:30.199 [2024-05-13 20:46:45.994556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:45.994572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:46.002836] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fc998 00:33:30.199 [2024-05-13 20:46:46.004190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:46.004205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:46.014004] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e23b8 00:33:30.199 [2024-05-13 20:46:46.015033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:46.015048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:46.025642] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e6300 00:33:30.199 [2024-05-13 20:46:46.027007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:46.027023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:46.037039] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e0a68 00:33:30.199 [2024-05-13 20:46:46.038397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:46.038413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:46.049723] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fef90 00:33:30.199 [2024-05-13 20:46:46.051552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:46.051567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:46.058893] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f6458 00:33:30.199 [2024-05-13 20:46:46.060079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10151 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:46.060094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:46.071236] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f7970 00:33:30.199 [2024-05-13 20:46:46.072627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.199 [2024-05-13 20:46:46.072643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:30.199 [2024-05-13 20:46:46.081719] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e3d08 00:33:30.199 [2024-05-13 20:46:46.082916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.200 [2024-05-13 20:46:46.082931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:30.200 [2024-05-13 20:46:46.094046] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f7970 00:33:30.200 [2024-05-13 20:46:46.095421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.200 [2024-05-13 20:46:46.095438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:30.200 [2024-05-13 20:46:46.104511] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e5a90 00:33:30.200 [2024-05-13 20:46:46.105710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.200 [2024-05-13 20:46:46.105727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:30.200 [2024-05-13 20:46:46.118142] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fbcf0 00:33:30.200 [2024-05-13 20:46:46.119958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.200 [2024-05-13 20:46:46.119974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:30.200 [2024-05-13 20:46:46.127288] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e73e0 00:33:30.200 [2024-05-13 20:46:46.128480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.200 [2024-05-13 20:46:46.128495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:30.200 [2024-05-13 20:46:46.139641] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e8088 00:33:30.200 [2024-05-13 20:46:46.141011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:37 nsid:1 lba:22388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.200 [2024-05-13 20:46:46.141026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:30.461 [2024-05-13 20:46:46.150122] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ed4e8 00:33:30.461 [2024-05-13 20:46:46.151320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.461 [2024-05-13 20:46:46.151336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:30.461 [2024-05-13 20:46:46.162456] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e8088 00:33:30.461 [2024-05-13 20:46:46.163829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.461 [2024-05-13 20:46:46.163845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:30.461 [2024-05-13 20:46:46.173675] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e1b48 00:33:30.461 [2024-05-13 20:46:46.174712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.461 [2024-05-13 20:46:46.174728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:30.461 [2024-05-13 20:46:46.186600] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190df550 00:33:30.461 [2024-05-13 20:46:46.188419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.461 [2024-05-13 20:46:46.188437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:30.461 [2024-05-13 20:46:46.196169] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f6458 00:33:30.461 [2024-05-13 20:46:46.197544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.461 [2024-05-13 20:46:46.197559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:30.461 [2024-05-13 20:46:46.206502] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e3d08 00:33:30.461 [2024-05-13 20:46:46.207399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.461 [2024-05-13 20:46:46.207414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.461 [2024-05-13 20:46:46.220121] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f6cc8 00:33:30.461 [2024-05-13 20:46:46.221644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.461 [2024-05-13 20:46:46.221661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.461 [2024-05-13 20:46:46.229335] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ed4e8 00:33:30.461 [2024-05-13 20:46:46.230251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.461 [2024-05-13 20:46:46.230266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.461 [2024-05-13 20:46:46.242949] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f2d80 00:33:30.461 [2024-05-13 20:46:46.244441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.461 [2024-05-13 20:46:46.244457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.461 [2024-05-13 20:46:46.252125] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e5a90 00:33:30.461 [2024-05-13 20:46:46.253028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.461 [2024-05-13 20:46:46.253043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.461 [2024-05-13 20:46:46.265758] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e27f0 00:33:30.461 [2024-05-13 20:46:46.267268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.461 [2024-05-13 20:46:46.267284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.461 [2024-05-13 20:46:46.274937] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e12d8 00:33:30.462 [2024-05-13 20:46:46.275794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.462 [2024-05-13 20:46:46.275809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.462 [2024-05-13 20:46:46.287593] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f7970 00:33:30.462 [2024-05-13 20:46:46.288947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.462 [2024-05-13 20:46:46.288967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:30.462 [2024-05-13 20:46:46.299924] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fd208 00:33:30.462 [2024-05-13 
20:46:46.301418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.462 [2024-05-13 20:46:46.301434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.462 [2024-05-13 20:46:46.309091] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f3a28 00:33:30.462 [2024-05-13 20:46:46.309959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.462 [2024-05-13 20:46:46.309975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.462 [2024-05-13 20:46:46.320493] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e27f0 00:33:30.462 [2024-05-13 20:46:46.321371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.462 [2024-05-13 20:46:46.321386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.462 [2024-05-13 20:46:46.334124] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e84c0 00:33:30.462 [2024-05-13 20:46:46.335647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.462 [2024-05-13 20:46:46.335662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.462 [2024-05-13 20:46:46.343303] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e12d8 00:33:30.462 [2024-05-13 20:46:46.344207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.462 [2024-05-13 20:46:46.344222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.462 [2024-05-13 20:46:46.356942] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e3d08 00:33:30.462 [2024-05-13 20:46:46.358466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.462 [2024-05-13 20:46:46.358481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.462 [2024-05-13 20:46:46.366145] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ee190 00:33:30.462 [2024-05-13 20:46:46.367044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.462 [2024-05-13 20:46:46.367059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.462 [2024-05-13 20:46:46.379742] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f6cc8 
00:33:30.462 [2024-05-13 20:46:46.381253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.462 [2024-05-13 20:46:46.381268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.462 [2024-05-13 20:46:46.388930] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190df118 00:33:30.462 [2024-05-13 20:46:46.389820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.462 [2024-05-13 20:46:46.389835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.462 [2024-05-13 20:46:46.402590] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ed4e8 00:33:30.462 [2024-05-13 20:46:46.404113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.462 [2024-05-13 20:46:46.404129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.413988] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e12d8 00:33:30.724 [2024-05-13 20:46:46.415527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.415542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.423166] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f2510 00:33:30.724 [2024-05-13 20:46:46.424064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.424079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.436773] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ed0b0 00:33:30.724 [2024-05-13 20:46:46.438249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.438265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.445936] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ebfd0 00:33:30.724 [2024-05-13 20:46:46.446835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.446850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.457395] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1972790) with pdu=0x2000190e84c0 00:33:30.724 [2024-05-13 20:46:46.458282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.458297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.471012] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e5a90 00:33:30.724 [2024-05-13 20:46:46.472531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.472546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.480176] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fc128 00:33:30.724 [2024-05-13 20:46:46.481073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.481088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.492859] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ddc00 00:33:30.724 [2024-05-13 20:46:46.494208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.494223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.502950] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fc128 00:33:30.724 [2024-05-13 20:46:46.503840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.503855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.516606] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f57b0 00:33:30.724 [2024-05-13 20:46:46.518122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.518137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.525784] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ed4e8 00:33:30.724 [2024-05-13 20:46:46.526667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.526682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.538449] tcp.c:2055:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e8d30 00:33:30.724 [2024-05-13 20:46:46.539798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.539814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.550800] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e5a90 00:33:30.724 [2024-05-13 20:46:46.552323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.552339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.559994] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f6020 00:33:30.724 [2024-05-13 20:46:46.560857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.560873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.573712] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e84c0 00:33:30.724 [2024-05-13 20:46:46.575222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.575237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.582902] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fda78 00:33:30.724 [2024-05-13 20:46:46.583805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.583823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.596528] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fc128 00:33:30.724 [2024-05-13 20:46:46.598014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.598029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.605700] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e3498 00:33:30.724 [2024-05-13 20:46:46.606605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.606620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.618401] 
tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f4f40 00:33:30.724 [2024-05-13 20:46:46.619755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.619771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.630696] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e3d08 00:33:30.724 [2024-05-13 20:46:46.632217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.632233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.639884] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ed0b0 00:33:30.724 [2024-05-13 20:46:46.640785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.640801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.653533] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f6020 00:33:30.724 [2024-05-13 20:46:46.655049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.655064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.724 [2024-05-13 20:46:46.662730] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f7da8 00:33:30.724 [2024-05-13 20:46:46.663637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.724 [2024-05-13 20:46:46.663651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.986 [2024-05-13 20:46:46.675420] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190eb760 00:33:30.986 [2024-05-13 20:46:46.676768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.986 [2024-05-13 20:46:46.676783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:30.986 [2024-05-13 20:46:46.687724] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ee190 00:33:30.986 [2024-05-13 20:46:46.689212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.986 [2024-05-13 20:46:46.689227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.986 
[2024-05-13 20:46:46.696881] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fc128 00:33:30.986 [2024-05-13 20:46:46.697773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.986 [2024-05-13 20:46:46.697788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.986 [2024-05-13 20:46:46.709581] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f6cc8 00:33:30.986 [2024-05-13 20:46:46.710923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.986 [2024-05-13 20:46:46.710938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:30.986 [2024-05-13 20:46:46.719111] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f0788 00:33:30.986 [2024-05-13 20:46:46.719990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.986 [2024-05-13 20:46:46.720005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:30.986 [2024-05-13 20:46:46.731433] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e0ea0 00:33:30.986 [2024-05-13 20:46:46.732150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.986 [2024-05-13 20:46:46.732166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:30.986 [2024-05-13 20:46:46.744345] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f7100 00:33:30.986 [2024-05-13 20:46:46.745845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.986 [2024-05-13 20:46:46.745861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:30.986 [2024-05-13 20:46:46.753510] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e0ea0 00:33:30.986 [2024-05-13 20:46:46.754387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.986 [2024-05-13 20:46:46.754402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:30.986 [2024-05-13 20:46:46.765828] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ed4e8 00:33:30.986 [2024-05-13 20:46:46.766891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.986 [2024-05-13 20:46:46.766907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 
dnr:0 00:33:30.986 [2024-05-13 20:46:46.776302] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190de470 00:33:30.986 [2024-05-13 20:46:46.777194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.986 [2024-05-13 20:46:46.777208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:30.986 [2024-05-13 20:46:46.789902] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190eaab8 00:33:30.986 [2024-05-13 20:46:46.791407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.986 [2024-05-13 20:46:46.791422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:30.986 [2024-05-13 20:46:46.799071] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190de470 00:33:30.986 [2024-05-13 20:46:46.799990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.986 [2024-05-13 20:46:46.800005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:30.986 [2024-05-13 20:46:46.811377] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190dece0 00:33:30.987 [2024-05-13 20:46:46.812422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.987 [2024-05-13 20:46:46.812437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:30.987 [2024-05-13 20:46:46.822775] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e1710 00:33:30.987 [2024-05-13 20:46:46.823819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.987 [2024-05-13 20:46:46.823834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:30.987 [2024-05-13 20:46:46.835685] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e4140 00:33:30.987 [2024-05-13 20:46:46.837194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.987 [2024-05-13 20:46:46.837209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:30.987 [2024-05-13 20:46:46.844854] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e0ea0 00:33:30.987 [2024-05-13 20:46:46.845736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.987 [2024-05-13 20:46:46.845751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 
cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:30.987 [2024-05-13 20:46:46.857181] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190de470 00:33:30.987 [2024-05-13 20:46:46.858239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.987 [2024-05-13 20:46:46.858255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:30.987 [2024-05-13 20:46:46.867671] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fcdd0 00:33:30.987 [2024-05-13 20:46:46.868520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.987 [2024-05-13 20:46:46.868535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:30.987 [2024-05-13 20:46:46.881270] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e73e0 00:33:30.987 [2024-05-13 20:46:46.882772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.987 [2024-05-13 20:46:46.882789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:30.987 [2024-05-13 20:46:46.891422] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fc998 00:33:30.987 [2024-05-13 20:46:46.892470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.987 [2024-05-13 20:46:46.892486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:30.987 [2024-05-13 20:46:46.901911] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190fcdd0 00:33:30.987 [2024-05-13 20:46:46.902788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.987 [2024-05-13 20:46:46.902803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:30.987 [2024-05-13 20:46:46.914212] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e73e0 00:33:30.987 [2024-05-13 20:46:46.915231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.987 [2024-05-13 20:46:46.915246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:30.987 [2024-05-13 20:46:46.926896] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f0788 00:33:30.987 [2024-05-13 20:46:46.928364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.987 [2024-05-13 20:46:46.928379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:46.936056] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f20d8 00:33:31.249 [2024-05-13 20:46:46.936951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.249 [2024-05-13 20:46:46.936966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:46.948387] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e4578 00:33:31.249 [2024-05-13 20:46:46.949433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.249 [2024-05-13 20:46:46.949448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:46.958889] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e4140 00:33:31.249 [2024-05-13 20:46:46.959765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.249 [2024-05-13 20:46:46.959780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:46.971210] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f20d8 00:33:31.249 [2024-05-13 20:46:46.972228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.249 [2024-05-13 20:46:46.972243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:46.981680] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e73e0 00:33:31.249 [2024-05-13 20:46:46.982535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.249 [2024-05-13 20:46:46.982549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:46.994033] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e4140 00:33:31.249 [2024-05-13 20:46:46.995090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.249 [2024-05-13 20:46:46.995105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:47.005186] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190efae0 00:33:31.249 [2024-05-13 20:46:47.005880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.249 [2024-05-13 20:46:47.005896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:47.018102] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e01f8 00:33:31.249 [2024-05-13 20:46:47.019611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.249 [2024-05-13 20:46:47.019626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:47.027279] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190efae0 00:33:31.249 [2024-05-13 20:46:47.028161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.249 [2024-05-13 20:46:47.028176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:47.039580] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190dece0 00:33:31.249 [2024-05-13 20:46:47.040603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.249 [2024-05-13 20:46:47.040619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:47.050037] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f31b8 00:33:31.249 [2024-05-13 20:46:47.050920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.249 [2024-05-13 20:46:47.050936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:47.062359] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190efae0 00:33:31.249 [2024-05-13 20:46:47.063393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.249 [2024-05-13 20:46:47.063408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:47.072819] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e4140 00:33:31.249 [2024-05-13 20:46:47.073697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.249 [2024-05-13 20:46:47.073712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:47.085182] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f31b8 00:33:31.249 [2024-05-13 20:46:47.086237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.249 [2024-05-13 20:46:47.086253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.249 [2024-05-13 20:46:47.095664] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ecc78 00:33:31.249 [2024-05-13 20:46:47.096538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.250 [2024-05-13 20:46:47.096554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.250 [2024-05-13 20:46:47.107979] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e4140 00:33:31.250 [2024-05-13 20:46:47.109035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.250 [2024-05-13 20:46:47.109050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.250 [2024-05-13 20:46:47.119150] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e6738 00:33:31.250 [2024-05-13 20:46:47.119839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.250 [2024-05-13 20:46:47.119854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:31.250 [2024-05-13 20:46:47.130768] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190ecc78 00:33:31.250 [2024-05-13 20:46:47.131824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.250 [2024-05-13 20:46:47.131840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.250 [2024-05-13 20:46:47.142202] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f20d8 00:33:31.250 [2024-05-13 20:46:47.143239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.250 [2024-05-13 20:46:47.143254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.250 [2024-05-13 20:46:47.154885] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f0788 00:33:31.250 [2024-05-13 20:46:47.156386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.250 [2024-05-13 20:46:47.156401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:31.250 [2024-05-13 20:46:47.164046] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190dece0 00:33:31.250 [2024-05-13 20:46:47.164927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.250 [2024-05-13 
20:46:47.164942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.250 [2024-05-13 20:46:47.177671] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e38d0 00:33:31.250 [2024-05-13 20:46:47.179178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.250 [2024-05-13 20:46:47.179196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:31.250 [2024-05-13 20:46:47.187762] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f96f8 00:33:31.250 [2024-05-13 20:46:47.188805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.250 [2024-05-13 20:46:47.188820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.198248] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190dece0 00:33:31.511 [2024-05-13 20:46:47.199137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.199152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.210585] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e38d0 00:33:31.511 [2024-05-13 20:46:47.211613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.211628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.221058] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190efae0 00:33:31.511 [2024-05-13 20:46:47.221936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.221951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.233390] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190dece0 00:33:31.511 [2024-05-13 20:46:47.234446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.234461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.243865] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f31b8 00:33:31.511 [2024-05-13 20:46:47.244746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25027 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:31.511 [2024-05-13 20:46:47.244761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.256172] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190efae0 00:33:31.511 [2024-05-13 20:46:47.257230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.257246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.266678] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e4140 00:33:31.511 [2024-05-13 20:46:47.267520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.267536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.280284] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190eaab8 00:33:31.511 [2024-05-13 20:46:47.281802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.281817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.289450] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e4140 00:33:31.511 [2024-05-13 20:46:47.290326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.290341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.303048] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e0ea0 00:33:31.511 [2024-05-13 20:46:47.304533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.304549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.313149] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e73e0 00:33:31.511 [2024-05-13 20:46:47.314200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.314214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.323627] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e4140 00:33:31.511 [2024-05-13 20:46:47.324513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5510 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.324528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.335979] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e0ea0 00:33:31.511 [2024-05-13 20:46:47.337034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.337049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.348668] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e9168 00:33:31.511 [2024-05-13 20:46:47.350181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.350196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.357858] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190f0ff8 00:33:31.511 [2024-05-13 20:46:47.358738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.358752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.370199] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e23b8 00:33:31.511 [2024-05-13 20:46:47.371255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.371271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.380671] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190eaab8 00:33:31.511 [2024-05-13 20:46:47.381524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.381539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.394307] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e38d0 00:33:31.511 [2024-05-13 20:46:47.395821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.511 [2024-05-13 20:46:47.395836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:31.511 [2024-05-13 20:46:47.403479] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190eaab8 00:33:31.512 [2024-05-13 20:46:47.404350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:12243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.512 [2024-05-13 20:46:47.404365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:31.512 [2024-05-13 20:46:47.415817] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972790) with pdu=0x2000190e27f0 00:33:31.512 [2024-05-13 20:46:47.416876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.512 [2024-05-13 20:46:47.416892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:31.512 00:33:31.512 Latency(us) 00:33:31.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.512 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:31.512 nvme0n1 : 2.00 22372.67 87.39 0.00 0.00 5713.81 2157.23 13981.01 00:33:31.512 =================================================================================================================== 00:33:31.512 Total : 22372.67 87.39 0.00 0.00 5713.81 2157.23 13981.01 00:33:31.512 0 00:33:31.512 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:31.512 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:31.512 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:31.512 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:31.512 | .driver_specific 00:33:31.512 | .nvme_error 00:33:31.512 | .status_code 00:33:31.512 | .command_transient_transport_error' 00:33:31.772 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 175 > 0 )) 00:33:31.772 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3287888 00:33:31.772 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3287888 ']' 00:33:31.772 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3287888 00:33:31.772 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:31.772 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:31.772 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3287888 00:33:31.772 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:31.772 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:31.772 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3287888' 00:33:31.772 killing process with pid 3287888 00:33:31.772 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3287888 00:33:31.772 Received shutdown signal, test time was about 2.000000 seconds 00:33:31.772 00:33:31.772 Latency(us) 00:33:31.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.772 
=================================================================================================================== 00:33:31.772 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:31.772 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3287888 00:33:32.034 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:32.034 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:32.034 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:32.034 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:32.034 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:32.034 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3288656 00:33:32.034 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3288656 /var/tmp/bperf.sock 00:33:32.034 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3288656 ']' 00:33:32.034 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:32.034 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:32.034 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:32.034 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:32.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:32.034 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:32.034 20:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.034 [2024-05-13 20:46:47.825532] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:33:32.034 [2024-05-13 20:46:47.825591] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288656 ] 00:33:32.034 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:32.034 Zero copy mechanism will not be used. 
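The teardown traced above is how each bdevperf run is judged: host/digest.sh reads the NVMe error counters back over the bperf RPC socket and requires the transient-transport-error count to be non-zero (175 for the 4 KiB, qd-128 run that just finished) before killing bdevperf and launching the next workload (randwrite, 128 KiB, qd 16, started above). A minimal sketch of that check, reusing the rpc.py invocation and jq filter visible in the trace:

    # Sketch of host/digest.sh's get_transient_errcount as traced above;
    # rpc.py path and RPC socket are the ones used in this run.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # The run only counts as a pass when digest errors were actually observed.
    (( errcount > 0 ))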
00:33:32.034 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.034 [2024-05-13 20:46:47.905325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.034 [2024-05-13 20:46:47.958870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.976 20:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:32.976 20:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:32.976 20:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:32.976 20:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:32.976 20:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:32.976 20:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.976 20:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.976 20:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.976 20:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:32.976 20:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.238 nvme0n1 00:33:33.238 20:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:33.238 20:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.238 20:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.238 20:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.238 20:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:33.238 20:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:33.238 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:33.238 Zero copy mechanism will not be used. 00:33:33.238 Running I/O for 2 seconds... 
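The setup traced just above arms the failure path for the run whose output follows: bdevperf is told to keep per-controller NVMe error statistics and retry failed I/O, the controller is attached with TCP data digest enabled, and the accel crc32c operation is set to corrupt 32 operations, which is what produces the data_crc32_calc_done errors and COMMAND TRANSIENT TRANSPORT ERROR completions below. A condensed sketch of that sequence, using the same RPC calls as the trace (BPERF_RPC is only local shorthand here, and the socket for the error-injection call is not visible in this excerpt, so the default is assumed):

    # Sketch of the digest-error setup traced above (host/digest.sh).
    BPERF_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Keep per-controller NVMe error counters and retry failed I/O indefinitely.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target subsystem with TCP data digest (--ddgst) enabled.
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt 32 crc32c operations; the trace issues this through the suite's
    # rpc_cmd helper, whose socket is not shown here (default socket assumed).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the timed random-write workload inside the running bdevperf.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests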
00:33:33.238 [2024-05-13 20:46:49.171957] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.238 [2024-05-13 20:46:49.172352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.238 [2024-05-13 20:46:49.172378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.499 [2024-05-13 20:46:49.182941] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.499 [2024-05-13 20:46:49.183305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.499 [2024-05-13 20:46:49.183330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.499 [2024-05-13 20:46:49.192780] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.499 [2024-05-13 20:46:49.193246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.499 [2024-05-13 20:46:49.193264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.499 [2024-05-13 20:46:49.201509] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.499 [2024-05-13 20:46:49.201831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.499 [2024-05-13 20:46:49.201849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.499 [2024-05-13 20:46:49.210657] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.499 [2024-05-13 20:46:49.211008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.499 [2024-05-13 20:46:49.211026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.499 [2024-05-13 20:46:49.220920] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.499 [2024-05-13 20:46:49.221264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.499 [2024-05-13 20:46:49.221281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.499 [2024-05-13 20:46:49.228021] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.499 [2024-05-13 20:46:49.228240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.499 [2024-05-13 20:46:49.228257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.499 [2024-05-13 20:46:49.234727] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.499 [2024-05-13 20:46:49.235070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.499 [2024-05-13 20:46:49.235087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.499 [2024-05-13 20:46:49.241109] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.499 [2024-05-13 20:46:49.241330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.499 [2024-05-13 20:46:49.241346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.499 [2024-05-13 20:46:49.248486] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.499 [2024-05-13 20:46:49.248837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.499 [2024-05-13 20:46:49.248854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.499 [2024-05-13 20:46:49.255161] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.499 [2024-05-13 20:46:49.255491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.499 [2024-05-13 20:46:49.255508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.499 [2024-05-13 20:46:49.264909] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.499 [2024-05-13 20:46:49.265254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.499 [2024-05-13 20:46:49.265271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.499 [2024-05-13 20:46:49.272519] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.499 [2024-05-13 20:46:49.272734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.499 [2024-05-13 20:46:49.272750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.499 [2024-05-13 20:46:49.279764] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.499 [2024-05-13 20:46:49.279981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.499 [2024-05-13 20:46:49.279996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.499 [2024-05-13 20:46:49.290216] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.499 [2024-05-13 20:46:49.290561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.499 [2024-05-13 20:46:49.290579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.300698] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.301034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 [2024-05-13 20:46:49.301050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.310630] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.310962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 [2024-05-13 20:46:49.310978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.320597] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.320928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 [2024-05-13 20:46:49.320945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.330683] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.331017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 [2024-05-13 20:46:49.331034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.339809] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.340232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 [2024-05-13 20:46:49.340250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.350455] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.350782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 [2024-05-13 20:46:49.350799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.360825] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.361156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 [2024-05-13 20:46:49.361172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.370798] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.371054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 [2024-05-13 20:46:49.371072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.381854] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.382278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 [2024-05-13 20:46:49.382296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.391116] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.391347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 [2024-05-13 20:46:49.391363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.400230] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.400569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 [2024-05-13 20:46:49.400586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.410136] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.410480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 [2024-05-13 20:46:49.410498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.420760] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.421089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 
[2024-05-13 20:46:49.421105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.429989] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.430219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 [2024-05-13 20:46:49.430235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.500 [2024-05-13 20:46:49.438841] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.500 [2024-05-13 20:46:49.439067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.500 [2024-05-13 20:46:49.439084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.762 [2024-05-13 20:46:49.448110] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.762 [2024-05-13 20:46:49.448343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.762 [2024-05-13 20:46:49.448359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.762 [2024-05-13 20:46:49.457135] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.762 [2024-05-13 20:46:49.457379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.762 [2024-05-13 20:46:49.457395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.762 [2024-05-13 20:46:49.466517] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.762 [2024-05-13 20:46:49.466615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.762 [2024-05-13 20:46:49.466629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.762 [2024-05-13 20:46:49.477971] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.762 [2024-05-13 20:46:49.478201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.762 [2024-05-13 20:46:49.478217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.762 [2024-05-13 20:46:49.488972] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.762 [2024-05-13 20:46:49.489326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.762 [2024-05-13 20:46:49.489343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.762 [2024-05-13 20:46:49.499859] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.762 [2024-05-13 20:46:49.500196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.762 [2024-05-13 20:46:49.500213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.762 [2024-05-13 20:46:49.510417] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.762 [2024-05-13 20:46:49.510514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.762 [2024-05-13 20:46:49.510529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.762 [2024-05-13 20:46:49.522235] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.762 [2024-05-13 20:46:49.522581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.762 [2024-05-13 20:46:49.522598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.762 [2024-05-13 20:46:49.534083] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.762 [2024-05-13 20:46:49.534443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.762 [2024-05-13 20:46:49.534460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.546481] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.546761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.546777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.558704] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.559045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.559062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.569702] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.570039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.570055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.582132] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.582372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.582388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.592831] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.593183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.593200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.604689] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.605029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.605046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.615590] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.615919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.615935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.625700] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.626165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.626183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.637273] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.637632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.637649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.649386] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.649748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.649769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.657872] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.658209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.658226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.666904] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.667228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.667244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.675434] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.675773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.675790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.685212] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.685628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.685645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.763 [2024-05-13 20:46:49.696291] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:33.763 [2024-05-13 20:46:49.696685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.763 [2024-05-13 20:46:49.696702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.025 [2024-05-13 20:46:49.706689] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.025 [2024-05-13 20:46:49.706806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.025 [2024-05-13 20:46:49.706820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.025 [2024-05-13 20:46:49.717665] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.025 
[2024-05-13 20:46:49.718006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.025 [2024-05-13 20:46:49.718023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.025 [2024-05-13 20:46:49.728001] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.025 [2024-05-13 20:46:49.728373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.025 [2024-05-13 20:46:49.728390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.025 [2024-05-13 20:46:49.739933] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.025 [2024-05-13 20:46:49.740404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.025 [2024-05-13 20:46:49.740421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.025 [2024-05-13 20:46:49.751976] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.025 [2024-05-13 20:46:49.752385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.025 [2024-05-13 20:46:49.752400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.764572] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.764760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.764774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.775337] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.775675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.775691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.789229] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.789597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.789614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.801154] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.801605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.801623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.813322] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.813673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.813689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.824917] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.825272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.825289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.835638] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.835787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.835803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.846327] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.846659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.846675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.856734] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.857083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.857099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.867310] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.867645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.867661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.877580] 
tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.877954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.877971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.888604] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.888824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.888840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.898637] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.898982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.898999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.908722] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.908896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.908911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.918455] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.918578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.918592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.929983] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.930349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.930369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.939373] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.939702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.939727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
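(The repeated tcp.c:2055:data_crc32_calc_done errors above are raised on the receive path when the CRC32C data digest computed over a data PDU's payload does not match the DDGST value carried in the PDU. A minimal, standalone sketch of that check follows; it assumes the usual CRC32C convention of seed 0xFFFFFFFF with a final complement, and the function and variable names here are illustrative, not SPDK's internal API.)

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
     * seed 0xFFFFFFFF, final complement. Illustrative sketch only. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* A "Data digest error" like the ones logged above means the digest
     * computed over the received payload did not match the received DDGST. */
    static int data_digest_ok(const uint8_t *payload, size_t len, uint32_t ddgst)
    {
        return crc32c(payload, len) == ddgst;
    }

    int main(void)
    {
        uint8_t payload[512] = {0};                 /* stand-in PDU payload buffer */
        uint32_t good = crc32c(payload, sizeof(payload));
        printf("match: %d  mismatch: %d\n",
               data_digest_ok(payload, sizeof(payload), good),
               data_digest_ok(payload, sizeof(payload), good ^ 1u));
        return 0;
    }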
00:33:34.026 [2024-05-13 20:46:49.949986] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.950320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.950336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.026 [2024-05-13 20:46:49.959977] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.026 [2024-05-13 20:46:49.960098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.026 [2024-05-13 20:46:49.960113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.288 [2024-05-13 20:46:49.970520] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.288 [2024-05-13 20:46:49.970750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:49.970766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:49.980839] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:49.981160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:49.981176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:49.992029] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:49.992386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:49.992403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.002854] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.003248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.003275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.011164] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.011523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.011544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.019829] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.020155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.020172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.027809] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.028224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.028253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.035665] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.035753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.035769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.046152] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.046389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.046407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.057395] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.057868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.057900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.066968] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.067364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.067383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.078557] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.079013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.079032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.088664] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.089093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.089116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.099676] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.099906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.099927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.108799] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.109253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.109273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.118960] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.119303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.119326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.128852] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.129300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.129322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.139540] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.139869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.139887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.149654] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.149883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.149899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.161457] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.161803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.161819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.172290] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.172788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.172804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.185024] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.185349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.185366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.197075] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.197190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.197205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.208054] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.208487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.208505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.219857] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.289 [2024-05-13 20:46:50.220132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.289 [2024-05-13 20:46:50.220149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.289 [2024-05-13 20:46:50.231446] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.290 [2024-05-13 20:46:50.231817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.290 
[2024-05-13 20:46:50.231833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.552 [2024-05-13 20:46:50.243093] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.552 [2024-05-13 20:46:50.243552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-13 20:46:50.243569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.552 [2024-05-13 20:46:50.254684] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.552 [2024-05-13 20:46:50.254989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-13 20:46:50.255006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.552 [2024-05-13 20:46:50.266729] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.552 [2024-05-13 20:46:50.266995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-13 20:46:50.267012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.552 [2024-05-13 20:46:50.277179] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.552 [2024-05-13 20:46:50.277505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-13 20:46:50.277522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.552 [2024-05-13 20:46:50.288947] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.552 [2024-05-13 20:46:50.289427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-13 20:46:50.289446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.552 [2024-05-13 20:46:50.301166] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.552 [2024-05-13 20:46:50.301416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-13 20:46:50.301432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.552 [2024-05-13 20:46:50.313148] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.552 [2024-05-13 20:46:50.313605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-13 20:46:50.313623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.552 [2024-05-13 20:46:50.325093] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.552 [2024-05-13 20:46:50.325534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-13 20:46:50.325550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.552 [2024-05-13 20:46:50.336717] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.552 [2024-05-13 20:46:50.337135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-13 20:46:50.337151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.552 [2024-05-13 20:46:50.348267] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.552 [2024-05-13 20:46:50.348716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-13 20:46:50.348732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.552 [2024-05-13 20:46:50.359756] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.360160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.360177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.371734] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.372200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.372217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.382098] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.382576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.382594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.391018] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.391382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.391402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.401132] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.401485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.401502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.408117] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.408535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.408551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.416983] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.417355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.417371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.422287] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.422497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.422513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.430837] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.431185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.431201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.438385] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.438604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.438620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.446675] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.446974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.446991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.453775] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.453974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.453989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.460400] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.460607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.460623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.466221] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.466569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.466586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.473753] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.474130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.474146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.483386] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.483598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.483614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.553 [2024-05-13 20:46:50.491681] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.553 [2024-05-13 20:46:50.492193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.553 [2024-05-13 20:46:50.492210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.815 [2024-05-13 20:46:50.501308] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.815 
[2024-05-13 20:46:50.501607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.815 [2024-05-13 20:46:50.501624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.815 [2024-05-13 20:46:50.512140] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.815 [2024-05-13 20:46:50.512567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.815 [2024-05-13 20:46:50.512585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.815 [2024-05-13 20:46:50.522485] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.815 [2024-05-13 20:46:50.522870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.815 [2024-05-13 20:46:50.522886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.815 [2024-05-13 20:46:50.533661] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.815 [2024-05-13 20:46:50.533996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.815 [2024-05-13 20:46:50.534013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.815 [2024-05-13 20:46:50.543466] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.815 [2024-05-13 20:46:50.543801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.815 [2024-05-13 20:46:50.543817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.815 [2024-05-13 20:46:50.552756] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.815 [2024-05-13 20:46:50.553026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.815 [2024-05-13 20:46:50.553043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.815 [2024-05-13 20:46:50.562612] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.815 [2024-05-13 20:46:50.562997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.815 [2024-05-13 20:46:50.563013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.815 [2024-05-13 20:46:50.571822] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.815 [2024-05-13 20:46:50.572158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.815 [2024-05-13 20:46:50.572175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.815 [2024-05-13 20:46:50.579720] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.815 [2024-05-13 20:46:50.579929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.815 [2024-05-13 20:46:50.579945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.815 [2024-05-13 20:46:50.587710] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.815 [2024-05-13 20:46:50.587940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.815 [2024-05-13 20:46:50.587955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.815 [2024-05-13 20:46:50.595423] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.815 [2024-05-13 20:46:50.595629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.815 [2024-05-13 20:46:50.595645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.815 [2024-05-13 20:46:50.603955] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.815 [2024-05-13 20:46:50.604156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.815 [2024-05-13 20:46:50.604172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.815 [2024-05-13 20:46:50.610847] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.815 [2024-05-13 20:46:50.611169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.815 [2024-05-13 20:46:50.611189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.815 [2024-05-13 20:46:50.617367] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.815 [2024-05-13 20:46:50.617570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.815 [2024-05-13 20:46:50.617586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.625456] 
tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.625668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.625684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.634711] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.635057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.635074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.642776] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.643205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.643222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.651670] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.652022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.652039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.660560] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.660909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.660925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.669372] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.669576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.669592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.677518] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.677981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.677998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
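(Each spdk_nvme_print_completion line above prints the fields of the NVMe completion queue entry: the "(00/22)" pair is (SCT/SC), i.e. Generic Command Status / Command Transient Transport Error, and p, m, dnr come from the same 16-bit status word in CQE dword 3. A small decoder for that word is sketched below; the bit layout follows the NVMe base specification as I recall it, and the names are illustrative rather than SPDK definitions.)

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the 16-bit phase + status word (CQE dword 3, bits 31:16),
     * i.e. the fields the log prints as (SCT/SC) ... p:_ m:_ dnr:_ */
    struct cqe_status {
        unsigned p   : 1;  /* phase tag           */
        unsigned sc  : 8;  /* status code         */
        unsigned sct : 3;  /* status code type    */
        unsigned crd : 2;  /* command retry delay */
        unsigned m   : 1;  /* more                */
        unsigned dnr : 1;  /* do not retry        */
    };

    static struct cqe_status decode_status(uint16_t w)
    {
        struct cqe_status s = {
            .p   = (w >> 0)  & 0x1,
            .sc  = (w >> 1)  & 0xff,
            .sct = (w >> 9)  & 0x7,
            .crd = (w >> 12) & 0x3,
            .m   = (w >> 14) & 0x1,
            .dnr = (w >> 15) & 0x1,
        };
        return s;
    }

    int main(void)
    {
        /* SCT 0x0 / SC 0x22 is what the log renders as
         * COMMAND TRANSIENT TRANSPORT ERROR (00/22), with p/m/dnr all zero. */
        uint16_t w = (uint16_t)((0x0u << 9) | (0x22u << 1));
        struct cqe_status s = decode_status(w);
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
    }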
00:33:34.816 [2024-05-13 20:46:50.686477] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.686865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.686881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.695147] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.695593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.695610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.703553] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.703823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.703839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.712454] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.712833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.712850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.718561] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.718898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.718914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.724418] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.724783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.724799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.732557] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.732758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.732774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.738965] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.739271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.739287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.747141] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.747418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.747439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.816 [2024-05-13 20:46:50.755772] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:34.816 [2024-05-13 20:46:50.755979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.816 [2024-05-13 20:46:50.755995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.078 [2024-05-13 20:46:50.764366] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.078 [2024-05-13 20:46:50.764708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.078 [2024-05-13 20:46:50.764724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.078 [2024-05-13 20:46:50.771293] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.078 [2024-05-13 20:46:50.771612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.078 [2024-05-13 20:46:50.771629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.078 [2024-05-13 20:46:50.777395] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.078 [2024-05-13 20:46:50.777597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.078 [2024-05-13 20:46:50.777612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.078 [2024-05-13 20:46:50.783997] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.078 [2024-05-13 20:46:50.784298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.078 [2024-05-13 20:46:50.784319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.078 [2024-05-13 20:46:50.791795] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.078 [2024-05-13 20:46:50.792003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.078 [2024-05-13 20:46:50.792019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.078 [2024-05-13 20:46:50.797300] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.078 [2024-05-13 20:46:50.797606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.078 [2024-05-13 20:46:50.797623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.078 [2024-05-13 20:46:50.805715] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.078 [2024-05-13 20:46:50.806040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.078 [2024-05-13 20:46:50.806056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.078 [2024-05-13 20:46:50.813078] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.078 [2024-05-13 20:46:50.813437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.078 [2024-05-13 20:46:50.813454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.078 [2024-05-13 20:46:50.820339] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.078 [2024-05-13 20:46:50.820646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.078 [2024-05-13 20:46:50.820663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.078 [2024-05-13 20:46:50.828045] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.078 [2024-05-13 20:46:50.828376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.078 [2024-05-13 20:46:50.828392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.078 [2024-05-13 20:46:50.836952] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.078 [2024-05-13 20:46:50.837275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.078 [2024-05-13 20:46:50.837292] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.078 [2024-05-13 20:46:50.844412] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.078 [2024-05-13 20:46:50.844758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.844774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.852956] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.853332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.853349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.859228] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.859588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.859604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.867078] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.867383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.867399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.875642] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.876125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.876144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.884519] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.884807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.884824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.893832] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.894079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:35.079 [2024-05-13 20:46:50.894095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.900360] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.900701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.900717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.906907] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.907126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.907143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.912570] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.912907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.912923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.920062] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.920277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.920293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.928560] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.928798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.928814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.936894] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.937233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.937249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.944498] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.944722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.944741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.952381] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.952618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.952635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.960032] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.960426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.960443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.970208] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.970594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.970611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.981256] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.981600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.981617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.991001] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.991459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.991475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:50.999591] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:50.999904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:50.999921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:51.010256] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:51.010476] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:51.010492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.079 [2024-05-13 20:46:51.018983] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.079 [2024-05-13 20:46:51.019340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.079 [2024-05-13 20:46:51.019357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.027380] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.027603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.027619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.036014] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.036369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.036385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.045185] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.045590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.045607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.053221] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.053468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.053484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.061484] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.061840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.061857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.070705] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.071136] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.071152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.078505] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.078710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.078726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.087365] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.087743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.087759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.095480] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.095778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.095795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.103905] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.104348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.104364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.111461] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.111730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.111747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.121961] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.122379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.122396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.130606] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 
00:33:35.341 [2024-05-13 20:46:51.130953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.130969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.139545] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.139853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.139870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.149954] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.150234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.150251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.341 [2024-05-13 20:46:51.160421] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1972ad0) with pdu=0x2000190fef90 00:33:35.341 [2024-05-13 20:46:51.160653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.341 [2024-05-13 20:46:51.160671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.341 00:33:35.342 Latency(us) 00:33:35.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.342 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:35.342 nvme0n1 : 2.00 3273.79 409.22 0.00 0.00 4878.67 2334.72 13107.20 00:33:35.342 =================================================================================================================== 00:33:35.342 Total : 3273.79 409.22 0.00 0.00 4878.67 2334.72 13107.20 00:33:35.342 0 00:33:35.342 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:35.342 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:35.342 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:35.342 | .driver_specific 00:33:35.342 | .nvme_error 00:33:35.342 | .status_code 00:33:35.342 | .command_transient_transport_error' 00:33:35.342 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 211 > 0 )) 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3288656 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3288656 ']' 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3288656 00:33:35.603 
20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3288656 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3288656' 00:33:35.603 killing process with pid 3288656 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3288656 00:33:35.603 Received shutdown signal, test time was about 2.000000 seconds 00:33:35.603 00:33:35.603 Latency(us) 00:33:35.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.603 =================================================================================================================== 00:33:35.603 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3288656 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3286240 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3286240 ']' 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3286240 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:35.603 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3286240 00:33:35.863 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:35.863 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:35.863 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3286240' 00:33:35.863 killing process with pid 3286240 00:33:35.863 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3286240 00:33:35.863 [2024-05-13 20:46:51.581980] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:35.863 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3286240 00:33:35.863 00:33:35.863 real 0m16.224s 00:33:35.863 user 0m31.838s 00:33:35.863 sys 0m3.349s 00:33:35.863 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:35.863 20:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.863 ************************************ 00:33:35.863 END TEST nvmf_digest_error 00:33:35.863 ************************************ 00:33:35.863 20:46:51 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:35.863 20:46:51 
nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:35.863 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:35.863 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:35.863 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:35.863 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:35.863 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:35.863 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:35.863 rmmod nvme_tcp 00:33:35.863 rmmod nvme_fabrics 00:33:35.863 rmmod nvme_keyring 00:33:36.124 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:36.124 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:36.124 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:36.124 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3286240 ']' 00:33:36.124 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3286240 00:33:36.124 20:46:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 3286240 ']' 00:33:36.124 20:46:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 3286240 00:33:36.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3286240) - No such process 00:33:36.125 20:46:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 3286240 is not found' 00:33:36.125 Process with pid 3286240 is not found 00:33:36.125 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:36.125 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:36.125 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:36.125 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:36.125 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:36.125 20:46:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.125 20:46:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:36.125 20:46:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.040 20:46:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:38.040 00:33:38.040 real 0m43.138s 00:33:38.040 user 1m6.373s 00:33:38.040 sys 0m12.706s 00:33:38.040 20:46:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:38.040 20:46:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:38.040 ************************************ 00:33:38.040 END TEST nvmf_digest 00:33:38.040 ************************************ 00:33:38.040 20:46:53 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:33:38.040 20:46:53 nvmf_tcp -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:33:38.040 20:46:53 nvmf_tcp -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:33:38.040 20:46:53 nvmf_tcp -- nvmf/nvmf.sh@120 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:38.040 20:46:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:38.040 20:46:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:38.040 20:46:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.302 
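The pass/fail decision for the digest-error stage above does not come from scraping these messages: host/digest.sh asks the running bdevperf for its per-bdev I/O statistics over the bperf RPC socket and requires the command_transient_transport_error counter to be non-zero (211 in this run) before tearing everything down. A minimal standalone sketch of that check, assuming bdevperf is still up with its RPC socket at /var/tmp/bperf.sock:

    #!/usr/bin/env bash
    # Sketch of get_transient_errcount from host/digest.sh: read bdevperf's iostat for
    # nvme0n1 and extract the transient transport error counter reported by the driver.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 )) || exit 1   # the digest-error test requires at least one such completion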
************************************ 00:33:38.302 START TEST nvmf_bdevperf 00:33:38.302 ************************************ 00:33:38.302 20:46:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:38.302 * Looking for test storage... 00:33:38.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:38.302 20:46:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:46.454 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:46.454 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:46.454 Found net devices under 0000:31:00.0: cvl_0_0 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:46.454 Found net devices under 0000:31:00.1: cvl_0_1 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:46.454 20:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.454 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.454 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.454 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:46.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:33:46.454 00:33:46.454 --- 10.0.0.2 ping statistics --- 00:33:46.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.455 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:46.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:33:46.455 00:33:46.455 --- 10.0.0.1 ping statistics --- 00:33:46.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.455 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3294077 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3294077 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3294077 ']' 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:46.455 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:46.455 [2024-05-13 20:47:02.175000] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:33:46.455 [2024-05-13 20:47:02.175047] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:46.455 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.455 [2024-05-13 20:47:02.265756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:46.455 [2024-05-13 20:47:02.330703] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:46.455 [2024-05-13 20:47:02.330742] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:46.455 [2024-05-13 20:47:02.330749] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:46.455 [2024-05-13 20:47:02.330755] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:46.455 [2024-05-13 20:47:02.330761] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:46.455 [2024-05-13 20:47:02.332329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:46.455 [2024-05-13 20:47:02.332451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:46.455 [2024-05-13 20:47:02.332568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.025 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:47.025 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:47.025 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:47.025 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:47.025 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.285 20:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.285 20:47:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:47.285 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.285 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.285 [2024-05-13 20:47:02.983938] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.285 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.285 20:47:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:47.285 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.285 20:47:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.285 Malloc0 00:33:47.285 20:47:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.285 20:47:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:47.285 20:47:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.285 20:47:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
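The tgt_init sequence traced here is the usual five-step bring-up these host tests perform before starting any I/O: create the TCP transport, create a 64 MiB Malloc bdev with 512-byte blocks, create subsystem cnode1, attach the bdev as its namespace, and expose a listener on 10.0.0.2:4420 (the address assigned to cvl_0_0 inside the cvl_0_0_ns_spdk namespace earlier in this log). Written out as plain rpc.py calls against the target's default RPC socket, equivalent to the rpc_cmd lines from host/bdevperf.sh@17-21:

    # Target-side setup, as issued by the test through its rpc_cmd wrapper.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420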
00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.286 [2024-05-13 20:47:03.053723] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:47.286 [2024-05-13 20:47:03.053978] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:47.286 { 00:33:47.286 "params": { 00:33:47.286 "name": "Nvme$subsystem", 00:33:47.286 "trtype": "$TEST_TRANSPORT", 00:33:47.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:47.286 "adrfam": "ipv4", 00:33:47.286 "trsvcid": "$NVMF_PORT", 00:33:47.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:47.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:47.286 "hdgst": ${hdgst:-false}, 00:33:47.286 "ddgst": ${ddgst:-false} 00:33:47.286 }, 00:33:47.286 "method": "bdev_nvme_attach_controller" 00:33:47.286 } 00:33:47.286 EOF 00:33:47.286 )") 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:47.286 20:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:47.286 "params": { 00:33:47.286 "name": "Nvme1", 00:33:47.286 "trtype": "tcp", 00:33:47.286 "traddr": "10.0.0.2", 00:33:47.286 "adrfam": "ipv4", 00:33:47.286 "trsvcid": "4420", 00:33:47.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:47.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:47.286 "hdgst": false, 00:33:47.286 "ddgst": false 00:33:47.286 }, 00:33:47.286 "method": "bdev_nvme_attach_controller" 00:33:47.286 }' 00:33:47.286 [2024-05-13 20:47:03.106456] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:33:47.286 [2024-05-13 20:47:03.106506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294209 ] 00:33:47.286 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.286 [2024-05-13 20:47:03.171982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.546 [2024-05-13 20:47:03.236587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.546 Running I/O for 1 seconds... 
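The target-side bring-up traced above reduces to five RPCs. As a minimal sketch only, assuming the stock scripts/rpc.py client and the same NQN, namespace, and listener address as in the trace (the test itself goes through its rpc_cmd wrapper rather than this exact invocation):

    spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches to that listener using the generated JSON shown above (the bdev_nvme_attach_controller block) and runs a 1-second verify workload at queue depth 128 with 4 KiB I/O.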
00:33:48.927 00:33:48.927 Latency(us) 00:33:48.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.927 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:48.927 Verification LBA range: start 0x0 length 0x4000 00:33:48.927 Nvme1n1 : 1.01 8975.69 35.06 0.00 0.00 14199.97 1952.43 15947.09 00:33:48.927 =================================================================================================================== 00:33:48.928 Total : 8975.69 35.06 0.00 0.00 14199.97 1952.43 15947.09 00:33:48.928 20:47:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3294541 00:33:48.928 20:47:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:48.928 20:47:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:48.928 20:47:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:48.928 20:47:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:48.928 20:47:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:48.928 20:47:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:48.928 20:47:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:48.928 { 00:33:48.928 "params": { 00:33:48.928 "name": "Nvme$subsystem", 00:33:48.928 "trtype": "$TEST_TRANSPORT", 00:33:48.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:48.928 "adrfam": "ipv4", 00:33:48.928 "trsvcid": "$NVMF_PORT", 00:33:48.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:48.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:48.928 "hdgst": ${hdgst:-false}, 00:33:48.928 "ddgst": ${ddgst:-false} 00:33:48.928 }, 00:33:48.928 "method": "bdev_nvme_attach_controller" 00:33:48.928 } 00:33:48.928 EOF 00:33:48.928 )") 00:33:48.928 20:47:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:48.928 20:47:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:48.928 20:47:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:48.928 20:47:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:48.928 "params": { 00:33:48.928 "name": "Nvme1", 00:33:48.928 "trtype": "tcp", 00:33:48.928 "traddr": "10.0.0.2", 00:33:48.928 "adrfam": "ipv4", 00:33:48.928 "trsvcid": "4420", 00:33:48.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:48.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:48.928 "hdgst": false, 00:33:48.928 "ddgst": false 00:33:48.928 }, 00:33:48.928 "method": "bdev_nvme_attach_controller" 00:33:48.928 }' 00:33:48.928 [2024-05-13 20:47:04.613211] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:33:48.928 [2024-05-13 20:47:04.613266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294541 ] 00:33:48.928 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.928 [2024-05-13 20:47:04.678585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.928 [2024-05-13 20:47:04.740805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.191 Running I/O for 15 seconds... 
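As a cross-check on the result line above, the MiB/s column is simply IOPS times the 4096-byte I/O size: 8975.69 * 4096 / 1048576 ≈ 35.06 MiB/s, matching the reported 35.06, while the Average/min/max columns are latencies in microseconds per the Latency(us) header. The second bdevperf invocation that follows repeats the same workload for 15 seconds (-t 15) and adds the -f flag from host/bdevperf.sh, ahead of the deliberate target kill a few lines below.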
00:33:51.743 20:47:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3294077 00:33:51.743 20:47:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:51.743 [2024-05-13 20:47:07.583787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.583830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.583851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.583861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.583873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.583883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.583895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.583902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.583911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.583919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.583928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.583935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.583949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.583957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.583966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.583976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.583988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.583997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584019] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584197] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.743 [2024-05-13 20:47:07.584481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.743 [2024-05-13 20:47:07.584490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 
[2024-05-13 20:47:07.584619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.744 [2024-05-13 20:47:07.584756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.744 [2024-05-13 20:47:07.584772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.744 [2024-05-13 20:47:07.584788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.744 [2024-05-13 20:47:07.584804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.744 [2024-05-13 20:47:07.584820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.744 [2024-05-13 20:47:07.584837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.744 [2024-05-13 20:47:07.584853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:93 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.584991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.584997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.585006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.585013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.585022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.585030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.585039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.585046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.585055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.585062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.585071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.585078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.585087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.585095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.585104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96624 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.585111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.585121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.585127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.585137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.744 [2024-05-13 20:47:07.585144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.744 [2024-05-13 20:47:07.585152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 
20:47:07.585272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.745 [2024-05-13 20:47:07.585685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.745 [2024-05-13 20:47:07.585816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.745 [2024-05-13 20:47:07.585826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.746 [2024-05-13 20:47:07.585833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.746 [2024-05-13 20:47:07.585842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.746 [2024-05-13 20:47:07.585848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.746 [2024-05-13 20:47:07.585857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.746 [2024-05-13 20:47:07.585865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.746 [2024-05-13 20:47:07.585874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.746 [2024-05-13 20:47:07.585881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.746 [2024-05-13 20:47:07.585890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.746 [2024-05-13 20:47:07.585896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.746 [2024-05-13 20:47:07.585906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.746 [2024-05-13 20:47:07.585915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.746 [2024-05-13 20:47:07.585924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.746 [2024-05-13 20:47:07.585930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:51.746 [2024-05-13 20:47:07.585940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.746 [2024-05-13 20:47:07.585946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.746 [2024-05-13 20:47:07.585955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.746 [2024-05-13 20:47:07.585963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.746 [2024-05-13 20:47:07.585972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.746 [2024-05-13 20:47:07.585979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.746 [2024-05-13 20:47:07.585988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.746 [2024-05-13 20:47:07.585995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.746 [2024-05-13 20:47:07.586004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.746 [2024-05-13 20:47:07.586011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.746 [2024-05-13 20:47:07.586020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.746 [2024-05-13 20:47:07.586027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.746 [2024-05-13 20:47:07.586035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea3520 is same with the state(5) to be set 00:33:51.746 [2024-05-13 20:47:07.586044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:51.746 [2024-05-13 20:47:07.586050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:51.746 [2024-05-13 20:47:07.586056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96112 len:8 PRP1 0x0 PRP2 0x0 00:33:51.746 [2024-05-13 20:47:07.586065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.746 [2024-05-13 20:47:07.586103] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ea3520 was disconnected and freed. reset controller. 
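The long run of ABORTED - SQ DELETION completions above is the intended effect of the crash injection rather than a defect: host/bdevperf.sh hard-kills the target (pid 3294077) while verify I/O is still queued, so every outstanding read and write on qpair 0x1ea3520 is completed manually as aborted and the qpair is disconnected and freed before the reset path starts. A minimal sketch of that injection step, paraphrasing the kill -9 and sleep steps at host/bdevperf.sh lines 33 and 35 in the trace (nvmfpid holds the pid assigned earlier in the trace):

    kill -9 "$nvmfpid"   # hard-kill nvmf_tgt while bdevperf I/O is in flight
    sleep 3              # give bdevperf time to abort queued I/O and begin reconnecting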
00:33:51.746 [2024-05-13 20:47:07.589648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.746 [2024-05-13 20:47:07.589694] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:51.746 [2024-05-13 20:47:07.590628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.746 [2024-05-13 20:47:07.591047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.746 [2024-05-13 20:47:07.591060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:51.746 [2024-05-13 20:47:07.591069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:51.746 [2024-05-13 20:47:07.591309] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:51.746 [2024-05-13 20:47:07.591540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.746 [2024-05-13 20:47:07.591548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.746 [2024-05-13 20:47:07.591556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.746 [2024-05-13 20:47:07.595074] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:51.746 [2024-05-13 20:47:07.603791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.746 [2024-05-13 20:47:07.604287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.746 [2024-05-13 20:47:07.604736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.746 [2024-05-13 20:47:07.604773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:51.746 [2024-05-13 20:47:07.604784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:51.746 [2024-05-13 20:47:07.605023] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:51.746 [2024-05-13 20:47:07.605243] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.746 [2024-05-13 20:47:07.605253] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.746 [2024-05-13 20:47:07.605260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.746 [2024-05-13 20:47:07.608787] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.746 [2024-05-13 20:47:07.617722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.746 [2024-05-13 20:47:07.618266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.746 [2024-05-13 20:47:07.618748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.746 [2024-05-13 20:47:07.618785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:51.746 [2024-05-13 20:47:07.618795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:51.746 [2024-05-13 20:47:07.619032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:51.746 [2024-05-13 20:47:07.619252] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.746 [2024-05-13 20:47:07.619260] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.746 [2024-05-13 20:47:07.619268] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.746 [2024-05-13 20:47:07.622792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:51.746 [2024-05-13 20:47:07.631497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.746 [2024-05-13 20:47:07.632078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.746 [2024-05-13 20:47:07.632372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.746 [2024-05-13 20:47:07.632385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:51.746 [2024-05-13 20:47:07.632393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:51.746 [2024-05-13 20:47:07.632613] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:51.746 [2024-05-13 20:47:07.632831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.746 [2024-05-13 20:47:07.632843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.746 [2024-05-13 20:47:07.632851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.746 [2024-05-13 20:47:07.636370] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.746 [2024-05-13 20:47:07.645266] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.746 [2024-05-13 20:47:07.645974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.746 [2024-05-13 20:47:07.646348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.746 [2024-05-13 20:47:07.646362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:51.746 [2024-05-13 20:47:07.646372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:51.746 [2024-05-13 20:47:07.646609] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:51.746 [2024-05-13 20:47:07.646829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.746 [2024-05-13 20:47:07.646837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.746 [2024-05-13 20:47:07.646844] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.746 [2024-05-13 20:47:07.650366] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:51.746 [2024-05-13 20:47:07.659060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.746 [2024-05-13 20:47:07.659747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.746 [2024-05-13 20:47:07.660114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.746 [2024-05-13 20:47:07.660127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:51.746 [2024-05-13 20:47:07.660136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:51.747 [2024-05-13 20:47:07.660380] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:51.747 [2024-05-13 20:47:07.660601] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.747 [2024-05-13 20:47:07.660609] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.747 [2024-05-13 20:47:07.660617] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.747 [2024-05-13 20:47:07.664128] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:51.747 [2024-05-13 20:47:07.672815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:51.747 [2024-05-13 20:47:07.673413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.747 [2024-05-13 20:47:07.673781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:51.747 [2024-05-13 20:47:07.673794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:51.747 [2024-05-13 20:47:07.673803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:51.747 [2024-05-13 20:47:07.674040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:51.747 [2024-05-13 20:47:07.674260] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:51.747 [2024-05-13 20:47:07.674268] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:51.747 [2024-05-13 20:47:07.674279] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:51.747 [2024-05-13 20:47:07.677799] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.010 [2024-05-13 20:47:07.686694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.010 [2024-05-13 20:47:07.687374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.010 [2024-05-13 20:47:07.687779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.010 [2024-05-13 20:47:07.687791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.010 [2024-05-13 20:47:07.687801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.010 [2024-05-13 20:47:07.688037] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.010 [2024-05-13 20:47:07.688257] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.010 [2024-05-13 20:47:07.688265] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.010 [2024-05-13 20:47:07.688272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.010 [2024-05-13 20:47:07.691789] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.010 [2024-05-13 20:47:07.700480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.010 [2024-05-13 20:47:07.701179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.010 [2024-05-13 20:47:07.701518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.010 [2024-05-13 20:47:07.701532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.010 [2024-05-13 20:47:07.701541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.010 [2024-05-13 20:47:07.701777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.010 [2024-05-13 20:47:07.701998] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.010 [2024-05-13 20:47:07.702006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.010 [2024-05-13 20:47:07.702014] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.010 [2024-05-13 20:47:07.705532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.010 [2024-05-13 20:47:07.714429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.010 [2024-05-13 20:47:07.715077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.010 [2024-05-13 20:47:07.715440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.010 [2024-05-13 20:47:07.715454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.010 [2024-05-13 20:47:07.715464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.010 [2024-05-13 20:47:07.715701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.010 [2024-05-13 20:47:07.715921] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.010 [2024-05-13 20:47:07.715930] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.010 [2024-05-13 20:47:07.715937] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.010 [2024-05-13 20:47:07.719474] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.010 [2024-05-13 20:47:07.728371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.010 [2024-05-13 20:47:07.729016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.010 [2024-05-13 20:47:07.729388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.010 [2024-05-13 20:47:07.729402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.010 [2024-05-13 20:47:07.729411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.010 [2024-05-13 20:47:07.729648] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.010 [2024-05-13 20:47:07.729868] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.010 [2024-05-13 20:47:07.729877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.010 [2024-05-13 20:47:07.729884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.010 [2024-05-13 20:47:07.733405] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.010 [2024-05-13 20:47:07.742296] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.010 [2024-05-13 20:47:07.743000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.010 [2024-05-13 20:47:07.743367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.010 [2024-05-13 20:47:07.743382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.010 [2024-05-13 20:47:07.743391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.010 [2024-05-13 20:47:07.743628] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.010 [2024-05-13 20:47:07.743848] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.010 [2024-05-13 20:47:07.743856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.010 [2024-05-13 20:47:07.743863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.010 [2024-05-13 20:47:07.747378] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.011 [2024-05-13 20:47:07.756073] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.011 [2024-05-13 20:47:07.756631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.757046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.757059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.011 [2024-05-13 20:47:07.757068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.011 [2024-05-13 20:47:07.757305] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.011 [2024-05-13 20:47:07.757540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.011 [2024-05-13 20:47:07.757550] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.011 [2024-05-13 20:47:07.757558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.011 [2024-05-13 20:47:07.761074] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.011 [2024-05-13 20:47:07.769980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.011 [2024-05-13 20:47:07.770675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.771081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.771093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.011 [2024-05-13 20:47:07.771102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.011 [2024-05-13 20:47:07.771346] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.011 [2024-05-13 20:47:07.771567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.011 [2024-05-13 20:47:07.771576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.011 [2024-05-13 20:47:07.771583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.011 [2024-05-13 20:47:07.775094] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.011 [2024-05-13 20:47:07.783781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.011 [2024-05-13 20:47:07.784460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.784824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.784836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.011 [2024-05-13 20:47:07.784845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.011 [2024-05-13 20:47:07.785082] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.011 [2024-05-13 20:47:07.785302] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.011 [2024-05-13 20:47:07.785310] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.011 [2024-05-13 20:47:07.785327] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.011 [2024-05-13 20:47:07.788839] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.011 [2024-05-13 20:47:07.797529] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.011 [2024-05-13 20:47:07.798225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.798578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.798592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.011 [2024-05-13 20:47:07.798601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.011 [2024-05-13 20:47:07.798837] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.011 [2024-05-13 20:47:07.799057] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.011 [2024-05-13 20:47:07.799067] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.011 [2024-05-13 20:47:07.799074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.011 [2024-05-13 20:47:07.802589] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.011 [2024-05-13 20:47:07.811274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.011 [2024-05-13 20:47:07.811938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.812302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.812322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.011 [2024-05-13 20:47:07.812332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.011 [2024-05-13 20:47:07.812569] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.011 [2024-05-13 20:47:07.812789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.011 [2024-05-13 20:47:07.812797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.011 [2024-05-13 20:47:07.812804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.011 [2024-05-13 20:47:07.816318] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.011 [2024-05-13 20:47:07.825220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.011 [2024-05-13 20:47:07.825733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.826041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.826051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.011 [2024-05-13 20:47:07.826058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.011 [2024-05-13 20:47:07.826276] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.011 [2024-05-13 20:47:07.826498] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.011 [2024-05-13 20:47:07.826506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.011 [2024-05-13 20:47:07.826513] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.011 [2024-05-13 20:47:07.830018] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.011 [2024-05-13 20:47:07.839142] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.011 [2024-05-13 20:47:07.839914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.840284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.840297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.011 [2024-05-13 20:47:07.840306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.011 [2024-05-13 20:47:07.840551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.011 [2024-05-13 20:47:07.840772] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.011 [2024-05-13 20:47:07.840780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.011 [2024-05-13 20:47:07.840787] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.011 [2024-05-13 20:47:07.844302] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.011 [2024-05-13 20:47:07.853006] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.011 [2024-05-13 20:47:07.853688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.854132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.854144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.011 [2024-05-13 20:47:07.854154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.011 [2024-05-13 20:47:07.854398] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.011 [2024-05-13 20:47:07.854619] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.011 [2024-05-13 20:47:07.854629] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.011 [2024-05-13 20:47:07.854637] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.011 [2024-05-13 20:47:07.858159] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.011 [2024-05-13 20:47:07.866865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.011 [2024-05-13 20:47:07.867549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.867926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.011 [2024-05-13 20:47:07.867939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.011 [2024-05-13 20:47:07.867948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.011 [2024-05-13 20:47:07.868185] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.011 [2024-05-13 20:47:07.868411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.011 [2024-05-13 20:47:07.868420] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.011 [2024-05-13 20:47:07.868427] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.011 [2024-05-13 20:47:07.871946] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.011 [2024-05-13 20:47:07.880635] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.012 [2024-05-13 20:47:07.881356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.012 [2024-05-13 20:47:07.881776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.012 [2024-05-13 20:47:07.881788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.012 [2024-05-13 20:47:07.881798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.012 [2024-05-13 20:47:07.882035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.012 [2024-05-13 20:47:07.882255] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.012 [2024-05-13 20:47:07.882263] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.012 [2024-05-13 20:47:07.882270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.012 [2024-05-13 20:47:07.885788] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.012 [2024-05-13 20:47:07.894473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.012 [2024-05-13 20:47:07.895030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.012 [2024-05-13 20:47:07.895396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.012 [2024-05-13 20:47:07.895407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.012 [2024-05-13 20:47:07.895419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.012 [2024-05-13 20:47:07.895637] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.012 [2024-05-13 20:47:07.895853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.012 [2024-05-13 20:47:07.895861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.012 [2024-05-13 20:47:07.895868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.012 [2024-05-13 20:47:07.899381] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.012 [2024-05-13 20:47:07.908266] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.012 [2024-05-13 20:47:07.908971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.012 [2024-05-13 20:47:07.909256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.012 [2024-05-13 20:47:07.909269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.012 [2024-05-13 20:47:07.909278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.012 [2024-05-13 20:47:07.909525] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.012 [2024-05-13 20:47:07.909746] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.012 [2024-05-13 20:47:07.909755] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.012 [2024-05-13 20:47:07.909762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.012 [2024-05-13 20:47:07.913272] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.012 [2024-05-13 20:47:07.922184] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.012 [2024-05-13 20:47:07.922848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.012 [2024-05-13 20:47:07.923130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.012 [2024-05-13 20:47:07.923144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.012 [2024-05-13 20:47:07.923153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.012 [2024-05-13 20:47:07.923398] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.012 [2024-05-13 20:47:07.923619] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.012 [2024-05-13 20:47:07.923627] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.012 [2024-05-13 20:47:07.923635] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.012 [2024-05-13 20:47:07.927149] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.012 [2024-05-13 20:47:07.936053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.012 [2024-05-13 20:47:07.936729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.012 [2024-05-13 20:47:07.937102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.012 [2024-05-13 20:47:07.937114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.012 [2024-05-13 20:47:07.937123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.012 [2024-05-13 20:47:07.937373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.012 [2024-05-13 20:47:07.937594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.012 [2024-05-13 20:47:07.937603] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.012 [2024-05-13 20:47:07.937610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.012 [2024-05-13 20:47:07.941123] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.012 [2024-05-13 20:47:07.949810] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.012 [2024-05-13 20:47:07.950600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.012 [2024-05-13 20:47:07.950967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.012 [2024-05-13 20:47:07.950980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.012 [2024-05-13 20:47:07.950989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.012 [2024-05-13 20:47:07.951226] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.012 [2024-05-13 20:47:07.951453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.012 [2024-05-13 20:47:07.951463] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.012 [2024-05-13 20:47:07.951470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.275 [2024-05-13 20:47:07.954983] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.275 [2024-05-13 20:47:07.963676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.275 [2024-05-13 20:47:07.964289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.275 [2024-05-13 20:47:07.964545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:07.964556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.276 [2024-05-13 20:47:07.964564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.276 [2024-05-13 20:47:07.964781] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.276 [2024-05-13 20:47:07.964998] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.276 [2024-05-13 20:47:07.965005] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.276 [2024-05-13 20:47:07.965012] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.276 [2024-05-13 20:47:07.968522] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.276 [2024-05-13 20:47:07.977615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.276 [2024-05-13 20:47:07.978177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:07.978521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:07.978532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.276 [2024-05-13 20:47:07.978539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.276 [2024-05-13 20:47:07.978756] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.276 [2024-05-13 20:47:07.978977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.276 [2024-05-13 20:47:07.978985] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.276 [2024-05-13 20:47:07.978992] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.276 [2024-05-13 20:47:07.982501] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.276 [2024-05-13 20:47:07.991386] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.276 [2024-05-13 20:47:07.992065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:07.992427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:07.992441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.276 [2024-05-13 20:47:07.992451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.276 [2024-05-13 20:47:07.992688] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.276 [2024-05-13 20:47:07.992908] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.276 [2024-05-13 20:47:07.992922] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.276 [2024-05-13 20:47:07.992929] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.276 [2024-05-13 20:47:07.996443] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.276 [2024-05-13 20:47:08.005334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.276 [2024-05-13 20:47:08.006022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:08.006307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:08.006329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.276 [2024-05-13 20:47:08.006339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.276 [2024-05-13 20:47:08.006576] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.276 [2024-05-13 20:47:08.006796] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.276 [2024-05-13 20:47:08.006804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.276 [2024-05-13 20:47:08.006811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.276 [2024-05-13 20:47:08.010324] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.276 [2024-05-13 20:47:08.019222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.276 [2024-05-13 20:47:08.019883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:08.020250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:08.020263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.276 [2024-05-13 20:47:08.020272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.276 [2024-05-13 20:47:08.020517] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.276 [2024-05-13 20:47:08.020737] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.276 [2024-05-13 20:47:08.020750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.276 [2024-05-13 20:47:08.020757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.276 [2024-05-13 20:47:08.024267] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.276 [2024-05-13 20:47:08.033158] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.276 [2024-05-13 20:47:08.033837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:08.034203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:08.034216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.276 [2024-05-13 20:47:08.034225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.276 [2024-05-13 20:47:08.034470] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.276 [2024-05-13 20:47:08.034691] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.276 [2024-05-13 20:47:08.034699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.276 [2024-05-13 20:47:08.034706] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.276 [2024-05-13 20:47:08.038214] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.276 [2024-05-13 20:47:08.047106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.276 [2024-05-13 20:47:08.047776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:08.048150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:08.048162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.276 [2024-05-13 20:47:08.048172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.276 [2024-05-13 20:47:08.048417] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.276 [2024-05-13 20:47:08.048637] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.276 [2024-05-13 20:47:08.048645] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.276 [2024-05-13 20:47:08.048653] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.276 [2024-05-13 20:47:08.052162] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.276 [2024-05-13 20:47:08.061057] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.276 [2024-05-13 20:47:08.061518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:08.061905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:08.061916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.276 [2024-05-13 20:47:08.061924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.276 [2024-05-13 20:47:08.062141] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.276 [2024-05-13 20:47:08.062363] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.276 [2024-05-13 20:47:08.062372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.276 [2024-05-13 20:47:08.062383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.276 [2024-05-13 20:47:08.065898] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.276 [2024-05-13 20:47:08.074988] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.276 [2024-05-13 20:47:08.075659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:08.076024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:08.076037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.276 [2024-05-13 20:47:08.076047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.276 [2024-05-13 20:47:08.076283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.276 [2024-05-13 20:47:08.076511] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.276 [2024-05-13 20:47:08.076520] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.276 [2024-05-13 20:47:08.076528] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.276 [2024-05-13 20:47:08.080040] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.276 [2024-05-13 20:47:08.088777] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.276 [2024-05-13 20:47:08.089407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:08.089832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.276 [2024-05-13 20:47:08.089845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.276 [2024-05-13 20:47:08.089855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.276 [2024-05-13 20:47:08.090092] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.276 [2024-05-13 20:47:08.090321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.276 [2024-05-13 20:47:08.090331] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.277 [2024-05-13 20:47:08.090339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.277 [2024-05-13 20:47:08.093852] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.277 [2024-05-13 20:47:08.102540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.277 [2024-05-13 20:47:08.103211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.103582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.103597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.277 [2024-05-13 20:47:08.103606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.277 [2024-05-13 20:47:08.103843] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.277 [2024-05-13 20:47:08.104063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.277 [2024-05-13 20:47:08.104071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.277 [2024-05-13 20:47:08.104078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.277 [2024-05-13 20:47:08.107603] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.277 [2024-05-13 20:47:08.116298] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.277 [2024-05-13 20:47:08.116882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.117251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.117264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.277 [2024-05-13 20:47:08.117274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.277 [2024-05-13 20:47:08.117528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.277 [2024-05-13 20:47:08.117749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.277 [2024-05-13 20:47:08.117757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.277 [2024-05-13 20:47:08.117765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.277 [2024-05-13 20:47:08.121284] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.277 [2024-05-13 20:47:08.130194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.277 [2024-05-13 20:47:08.130874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.131243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.131255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.277 [2024-05-13 20:47:08.131265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.277 [2024-05-13 20:47:08.131508] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.277 [2024-05-13 20:47:08.131729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.277 [2024-05-13 20:47:08.131738] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.277 [2024-05-13 20:47:08.131746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.277 [2024-05-13 20:47:08.135257] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.277 [2024-05-13 20:47:08.143951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.277 [2024-05-13 20:47:08.144628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.144999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.145012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.277 [2024-05-13 20:47:08.145022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.277 [2024-05-13 20:47:08.145259] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.277 [2024-05-13 20:47:08.145485] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.277 [2024-05-13 20:47:08.145495] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.277 [2024-05-13 20:47:08.145502] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.277 [2024-05-13 20:47:08.149015] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.277 [2024-05-13 20:47:08.157719] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.277 [2024-05-13 20:47:08.158439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.158808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.158820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.277 [2024-05-13 20:47:08.158829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.277 [2024-05-13 20:47:08.159066] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.277 [2024-05-13 20:47:08.159285] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.277 [2024-05-13 20:47:08.159294] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.277 [2024-05-13 20:47:08.159302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.277 [2024-05-13 20:47:08.162819] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.277 [2024-05-13 20:47:08.171514] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.277 [2024-05-13 20:47:08.172081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.172616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.172654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.277 [2024-05-13 20:47:08.172664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.277 [2024-05-13 20:47:08.172902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.277 [2024-05-13 20:47:08.173122] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.277 [2024-05-13 20:47:08.173130] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.277 [2024-05-13 20:47:08.173138] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.277 [2024-05-13 20:47:08.176657] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.277 [2024-05-13 20:47:08.185350] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.277 [2024-05-13 20:47:08.186025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.186549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.186586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.277 [2024-05-13 20:47:08.186596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.277 [2024-05-13 20:47:08.186832] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.277 [2024-05-13 20:47:08.187053] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.277 [2024-05-13 20:47:08.187061] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.277 [2024-05-13 20:47:08.187069] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.277 [2024-05-13 20:47:08.190589] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.277 [2024-05-13 20:47:08.199277] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.277 [2024-05-13 20:47:08.200002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.200356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.200369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.277 [2024-05-13 20:47:08.200378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.277 [2024-05-13 20:47:08.200615] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.277 [2024-05-13 20:47:08.200835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.277 [2024-05-13 20:47:08.200843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.277 [2024-05-13 20:47:08.200850] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.277 [2024-05-13 20:47:08.204372] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.277 [2024-05-13 20:47:08.213061] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.277 [2024-05-13 20:47:08.213730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.214153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.277 [2024-05-13 20:47:08.214166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.277 [2024-05-13 20:47:08.214175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.277 [2024-05-13 20:47:08.214419] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.277 [2024-05-13 20:47:08.214639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.277 [2024-05-13 20:47:08.214647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.277 [2024-05-13 20:47:08.214654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.541 [2024-05-13 20:47:08.218182] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.541 [2024-05-13 20:47:08.226878] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.541 [2024-05-13 20:47:08.227557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.227962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.227974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.541 [2024-05-13 20:47:08.227983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.541 [2024-05-13 20:47:08.228220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.541 [2024-05-13 20:47:08.228448] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.541 [2024-05-13 20:47:08.228457] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.541 [2024-05-13 20:47:08.228465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.541 [2024-05-13 20:47:08.231980] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.541 [2024-05-13 20:47:08.240664] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.541 [2024-05-13 20:47:08.241383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.241654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.241666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.541 [2024-05-13 20:47:08.241676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.541 [2024-05-13 20:47:08.241912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.541 [2024-05-13 20:47:08.242132] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.541 [2024-05-13 20:47:08.242140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.541 [2024-05-13 20:47:08.242147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.541 [2024-05-13 20:47:08.245666] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.541 [2024-05-13 20:47:08.254569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.541 [2024-05-13 20:47:08.255275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.255627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.255640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.541 [2024-05-13 20:47:08.255650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.541 [2024-05-13 20:47:08.255886] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.541 [2024-05-13 20:47:08.256106] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.541 [2024-05-13 20:47:08.256115] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.541 [2024-05-13 20:47:08.256122] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.541 [2024-05-13 20:47:08.259642] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.541 [2024-05-13 20:47:08.268329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.541 [2024-05-13 20:47:08.269045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.269291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.269305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.541 [2024-05-13 20:47:08.269322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.541 [2024-05-13 20:47:08.269560] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.541 [2024-05-13 20:47:08.269780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.541 [2024-05-13 20:47:08.269788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.541 [2024-05-13 20:47:08.269796] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.541 [2024-05-13 20:47:08.273318] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.541 [2024-05-13 20:47:08.282209] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.541 [2024-05-13 20:47:08.282885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.283261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.283274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.541 [2024-05-13 20:47:08.283287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.541 [2024-05-13 20:47:08.283531] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.541 [2024-05-13 20:47:08.283752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.541 [2024-05-13 20:47:08.283760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.541 [2024-05-13 20:47:08.283768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.541 [2024-05-13 20:47:08.287281] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.541 [2024-05-13 20:47:08.295967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.541 [2024-05-13 20:47:08.296590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.296929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.296938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.541 [2024-05-13 20:47:08.296946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.541 [2024-05-13 20:47:08.297163] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.541 [2024-05-13 20:47:08.297384] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.541 [2024-05-13 20:47:08.297392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.541 [2024-05-13 20:47:08.297399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.541 [2024-05-13 20:47:08.300907] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.541 [2024-05-13 20:47:08.309791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.541 [2024-05-13 20:47:08.310394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.310783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.310795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.541 [2024-05-13 20:47:08.310804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.541 [2024-05-13 20:47:08.311041] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.541 [2024-05-13 20:47:08.311260] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.541 [2024-05-13 20:47:08.311268] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.541 [2024-05-13 20:47:08.311276] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.541 [2024-05-13 20:47:08.314799] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.541 [2024-05-13 20:47:08.323703] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.541 [2024-05-13 20:47:08.324306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.324690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.324702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.541 [2024-05-13 20:47:08.324712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.541 [2024-05-13 20:47:08.324952] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.541 [2024-05-13 20:47:08.325172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.541 [2024-05-13 20:47:08.325180] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.541 [2024-05-13 20:47:08.325188] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.541 [2024-05-13 20:47:08.328703] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.541 [2024-05-13 20:47:08.337593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.541 [2024-05-13 20:47:08.338306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.338673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.541 [2024-05-13 20:47:08.338686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.541 [2024-05-13 20:47:08.338695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.542 [2024-05-13 20:47:08.338931] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.542 [2024-05-13 20:47:08.339151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.542 [2024-05-13 20:47:08.339160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.542 [2024-05-13 20:47:08.339167] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.542 [2024-05-13 20:47:08.342682] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.542 [2024-05-13 20:47:08.351373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.542 [2024-05-13 20:47:08.351970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.352340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.352354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.542 [2024-05-13 20:47:08.352363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.542 [2024-05-13 20:47:08.352600] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.542 [2024-05-13 20:47:08.352819] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.542 [2024-05-13 20:47:08.352827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.542 [2024-05-13 20:47:08.352835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.542 [2024-05-13 20:47:08.356350] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.542 [2024-05-13 20:47:08.365238] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.542 [2024-05-13 20:47:08.365820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.366042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.366053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.542 [2024-05-13 20:47:08.366061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.542 [2024-05-13 20:47:08.366278] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.542 [2024-05-13 20:47:08.366505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.542 [2024-05-13 20:47:08.366513] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.542 [2024-05-13 20:47:08.366520] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.542 [2024-05-13 20:47:08.370030] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.542 [2024-05-13 20:47:08.379128] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.542 [2024-05-13 20:47:08.379781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.380216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.380229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.542 [2024-05-13 20:47:08.380238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.542 [2024-05-13 20:47:08.380483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.542 [2024-05-13 20:47:08.380704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.542 [2024-05-13 20:47:08.380712] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.542 [2024-05-13 20:47:08.380719] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.542 [2024-05-13 20:47:08.384235] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.542 [2024-05-13 20:47:08.392920] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.542 [2024-05-13 20:47:08.393447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.393822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.393834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.542 [2024-05-13 20:47:08.393844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.542 [2024-05-13 20:47:08.394080] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.542 [2024-05-13 20:47:08.394299] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.542 [2024-05-13 20:47:08.394308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.542 [2024-05-13 20:47:08.394324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.542 [2024-05-13 20:47:08.397839] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.542 [2024-05-13 20:47:08.406730] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.542 [2024-05-13 20:47:08.407432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.407741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.407754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.542 [2024-05-13 20:47:08.407764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.542 [2024-05-13 20:47:08.408000] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.542 [2024-05-13 20:47:08.408220] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.542 [2024-05-13 20:47:08.408233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.542 [2024-05-13 20:47:08.408240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.542 [2024-05-13 20:47:08.411759] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.542 [2024-05-13 20:47:08.420664] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.542 [2024-05-13 20:47:08.421374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.421705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.421717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.542 [2024-05-13 20:47:08.421727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.542 [2024-05-13 20:47:08.421963] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.542 [2024-05-13 20:47:08.422184] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.542 [2024-05-13 20:47:08.422192] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.542 [2024-05-13 20:47:08.422199] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.542 [2024-05-13 20:47:08.425717] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.542 [2024-05-13 20:47:08.434620] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.542 [2024-05-13 20:47:08.435360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.435587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.435599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.542 [2024-05-13 20:47:08.435608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.542 [2024-05-13 20:47:08.435844] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.542 [2024-05-13 20:47:08.436065] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.542 [2024-05-13 20:47:08.436073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.542 [2024-05-13 20:47:08.436080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.542 [2024-05-13 20:47:08.439599] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.542 [2024-05-13 20:47:08.448494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.542 [2024-05-13 20:47:08.449083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.449401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.449412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.542 [2024-05-13 20:47:08.449420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.542 [2024-05-13 20:47:08.449637] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.542 [2024-05-13 20:47:08.449855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.542 [2024-05-13 20:47:08.449862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.542 [2024-05-13 20:47:08.449873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.542 [2024-05-13 20:47:08.453387] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.542 [2024-05-13 20:47:08.462276] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.542 [2024-05-13 20:47:08.462933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.463292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.542 [2024-05-13 20:47:08.463304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.542 [2024-05-13 20:47:08.463322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.542 [2024-05-13 20:47:08.463560] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.542 [2024-05-13 20:47:08.463780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.542 [2024-05-13 20:47:08.463789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.543 [2024-05-13 20:47:08.463796] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.543 [2024-05-13 20:47:08.467446] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.543 [2024-05-13 20:47:08.476131] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.543 [2024-05-13 20:47:08.476797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.543 [2024-05-13 20:47:08.477160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.543 [2024-05-13 20:47:08.477172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.543 [2024-05-13 20:47:08.477182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.543 [2024-05-13 20:47:08.477429] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.543 [2024-05-13 20:47:08.477650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.543 [2024-05-13 20:47:08.477658] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.543 [2024-05-13 20:47:08.477665] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.543 [2024-05-13 20:47:08.481183] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.806 [2024-05-13 20:47:08.490086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.806 [2024-05-13 20:47:08.490675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.806 [2024-05-13 20:47:08.491055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.806 [2024-05-13 20:47:08.491068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.806 [2024-05-13 20:47:08.491077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.806 [2024-05-13 20:47:08.491323] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.806 [2024-05-13 20:47:08.491544] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.806 [2024-05-13 20:47:08.491553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.806 [2024-05-13 20:47:08.491561] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.806 [2024-05-13 20:47:08.495081] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.806 [2024-05-13 20:47:08.503973] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.806 [2024-05-13 20:47:08.504555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.806 [2024-05-13 20:47:08.504887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.806 [2024-05-13 20:47:08.504900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.806 [2024-05-13 20:47:08.504909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.806 [2024-05-13 20:47:08.505146] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.806 [2024-05-13 20:47:08.505373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.806 [2024-05-13 20:47:08.505382] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.806 [2024-05-13 20:47:08.505389] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.806 [2024-05-13 20:47:08.508898] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.806 [2024-05-13 20:47:08.517798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.806 [2024-05-13 20:47:08.518530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.806 [2024-05-13 20:47:08.518891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.806 [2024-05-13 20:47:08.518904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.806 [2024-05-13 20:47:08.518913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.806 [2024-05-13 20:47:08.519150] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.806 [2024-05-13 20:47:08.519377] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.806 [2024-05-13 20:47:08.519387] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.806 [2024-05-13 20:47:08.519394] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.806 [2024-05-13 20:47:08.522908] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.806 [2024-05-13 20:47:08.531612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.806 [2024-05-13 20:47:08.532268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.806 [2024-05-13 20:47:08.532633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.806 [2024-05-13 20:47:08.532647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.806 [2024-05-13 20:47:08.532656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.806 [2024-05-13 20:47:08.532893] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.806 [2024-05-13 20:47:08.533113] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.806 [2024-05-13 20:47:08.533122] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.806 [2024-05-13 20:47:08.533129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.806 [2024-05-13 20:47:08.536653] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.806 [2024-05-13 20:47:08.545369] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.806 [2024-05-13 20:47:08.546087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.806 [2024-05-13 20:47:08.546461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.806 [2024-05-13 20:47:08.546475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.806 [2024-05-13 20:47:08.546484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.806 [2024-05-13 20:47:08.546721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.806 [2024-05-13 20:47:08.546941] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.806 [2024-05-13 20:47:08.546949] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.806 [2024-05-13 20:47:08.546956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.806 [2024-05-13 20:47:08.550478] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.806 [2024-05-13 20:47:08.559181] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.806 [2024-05-13 20:47:08.559799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.806 [2024-05-13 20:47:08.560162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.806 [2024-05-13 20:47:08.560174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.806 [2024-05-13 20:47:08.560184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.806 [2024-05-13 20:47:08.560427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.806 [2024-05-13 20:47:08.560647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.806 [2024-05-13 20:47:08.560656] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.806 [2024-05-13 20:47:08.560663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.806 [2024-05-13 20:47:08.564182] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.806 [2024-05-13 20:47:08.573094] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.806 [2024-05-13 20:47:08.573756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.806 [2024-05-13 20:47:08.574129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.806 [2024-05-13 20:47:08.574142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.806 [2024-05-13 20:47:08.574151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.806 [2024-05-13 20:47:08.574397] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.807 [2024-05-13 20:47:08.574618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.807 [2024-05-13 20:47:08.574626] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.807 [2024-05-13 20:47:08.574633] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.807 [2024-05-13 20:47:08.578143] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.807 [2024-05-13 20:47:08.587047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.807 [2024-05-13 20:47:08.587716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.588089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.588103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.807 [2024-05-13 20:47:08.588112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.807 [2024-05-13 20:47:08.588357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.807 [2024-05-13 20:47:08.588578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.807 [2024-05-13 20:47:08.588586] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.807 [2024-05-13 20:47:08.588593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.807 [2024-05-13 20:47:08.592104] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.807 [2024-05-13 20:47:08.601001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.807 [2024-05-13 20:47:08.601663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.602024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.602037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.807 [2024-05-13 20:47:08.602046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.807 [2024-05-13 20:47:08.602283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.807 [2024-05-13 20:47:08.602510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.807 [2024-05-13 20:47:08.602519] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.807 [2024-05-13 20:47:08.602526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.807 [2024-05-13 20:47:08.606039] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.807 [2024-05-13 20:47:08.614945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.807 [2024-05-13 20:47:08.615520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.615888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.615901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.807 [2024-05-13 20:47:08.615910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.807 [2024-05-13 20:47:08.616148] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.807 [2024-05-13 20:47:08.616373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.807 [2024-05-13 20:47:08.616383] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.807 [2024-05-13 20:47:08.616390] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.807 [2024-05-13 20:47:08.619920] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.807 [2024-05-13 20:47:08.628815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.807 [2024-05-13 20:47:08.629473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.629847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.629860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.807 [2024-05-13 20:47:08.629869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.807 [2024-05-13 20:47:08.630106] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.807 [2024-05-13 20:47:08.630334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.807 [2024-05-13 20:47:08.630343] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.807 [2024-05-13 20:47:08.630350] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.807 [2024-05-13 20:47:08.633868] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.807 [2024-05-13 20:47:08.642678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.807 [2024-05-13 20:47:08.643380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.643801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.643813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.807 [2024-05-13 20:47:08.643822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.807 [2024-05-13 20:47:08.644059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.807 [2024-05-13 20:47:08.644279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.807 [2024-05-13 20:47:08.644288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.807 [2024-05-13 20:47:08.644295] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.807 [2024-05-13 20:47:08.647818] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.807 [2024-05-13 20:47:08.656518] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.807 [2024-05-13 20:47:08.657132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.657427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.657438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.807 [2024-05-13 20:47:08.657446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.807 [2024-05-13 20:47:08.657664] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.807 [2024-05-13 20:47:08.657881] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.807 [2024-05-13 20:47:08.657889] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.807 [2024-05-13 20:47:08.657895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.807 [2024-05-13 20:47:08.661406] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.807 [2024-05-13 20:47:08.670297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.807 [2024-05-13 20:47:08.670861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.671198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.671207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.807 [2024-05-13 20:47:08.671219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.807 [2024-05-13 20:47:08.671440] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.807 [2024-05-13 20:47:08.671656] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.807 [2024-05-13 20:47:08.671664] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.807 [2024-05-13 20:47:08.671671] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.807 [2024-05-13 20:47:08.675180] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.807 [2024-05-13 20:47:08.684070] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.807 [2024-05-13 20:47:08.684691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.685029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.685039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.807 [2024-05-13 20:47:08.685046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.807 [2024-05-13 20:47:08.685263] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.807 [2024-05-13 20:47:08.685485] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.807 [2024-05-13 20:47:08.685494] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.807 [2024-05-13 20:47:08.685501] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.807 [2024-05-13 20:47:08.689007] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.807 [2024-05-13 20:47:08.697917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.807 [2024-05-13 20:47:08.698624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.698987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.807 [2024-05-13 20:47:08.699001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.807 [2024-05-13 20:47:08.699010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.807 [2024-05-13 20:47:08.699246] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.807 [2024-05-13 20:47:08.699475] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.807 [2024-05-13 20:47:08.699484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.807 [2024-05-13 20:47:08.699491] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.807 [2024-05-13 20:47:08.703001] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.807 [2024-05-13 20:47:08.711696] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.807 [2024-05-13 20:47:08.712356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.808 [2024-05-13 20:47:08.712743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.808 [2024-05-13 20:47:08.712756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.808 [2024-05-13 20:47:08.712765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.808 [2024-05-13 20:47:08.713006] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.808 [2024-05-13 20:47:08.713226] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.808 [2024-05-13 20:47:08.713235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.808 [2024-05-13 20:47:08.713242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.808 [2024-05-13 20:47:08.716765] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:52.808 [2024-05-13 20:47:08.725470] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.808 [2024-05-13 20:47:08.726169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.808 [2024-05-13 20:47:08.726556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.808 [2024-05-13 20:47:08.726570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.808 [2024-05-13 20:47:08.726579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.808 [2024-05-13 20:47:08.726816] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.808 [2024-05-13 20:47:08.727036] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.808 [2024-05-13 20:47:08.727045] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.808 [2024-05-13 20:47:08.727052] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.808 [2024-05-13 20:47:08.730570] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.808 [2024-05-13 20:47:08.739286] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:52.808 [2024-05-13 20:47:08.739823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.808 [2024-05-13 20:47:08.740117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.808 [2024-05-13 20:47:08.740126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:52.808 [2024-05-13 20:47:08.740134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:52.808 [2024-05-13 20:47:08.740356] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:52.808 [2024-05-13 20:47:08.740574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:52.808 [2024-05-13 20:47:08.740581] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:52.808 [2024-05-13 20:47:08.740588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:52.808 [2024-05-13 20:47:08.744101] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.070 [2024-05-13 20:47:08.753201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.070 [2024-05-13 20:47:08.753787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.754164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.754173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.070 [2024-05-13 20:47:08.754181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.070 [2024-05-13 20:47:08.754409] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.070 [2024-05-13 20:47:08.754627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.070 [2024-05-13 20:47:08.754634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.070 [2024-05-13 20:47:08.754641] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.070 [2024-05-13 20:47:08.758149] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.070 [2024-05-13 20:47:08.767040] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.070 [2024-05-13 20:47:08.767688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.768096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.768109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.070 [2024-05-13 20:47:08.768118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.070 [2024-05-13 20:47:08.768363] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.070 [2024-05-13 20:47:08.768584] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.070 [2024-05-13 20:47:08.768592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.070 [2024-05-13 20:47:08.768599] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.070 [2024-05-13 20:47:08.772111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.070 [2024-05-13 20:47:08.780804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.070 [2024-05-13 20:47:08.781432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.781820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.781832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.070 [2024-05-13 20:47:08.781841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.070 [2024-05-13 20:47:08.782078] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.070 [2024-05-13 20:47:08.782298] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.070 [2024-05-13 20:47:08.782306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.070 [2024-05-13 20:47:08.782324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.070 [2024-05-13 20:47:08.785839] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.070 [2024-05-13 20:47:08.794750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.070 [2024-05-13 20:47:08.795323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.795576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.795586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.070 [2024-05-13 20:47:08.795593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.070 [2024-05-13 20:47:08.795811] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.070 [2024-05-13 20:47:08.796035] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.070 [2024-05-13 20:47:08.796043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.070 [2024-05-13 20:47:08.796049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.070 [2024-05-13 20:47:08.799560] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.070 [2024-05-13 20:47:08.808660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.070 [2024-05-13 20:47:08.809225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.809580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.809590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.070 [2024-05-13 20:47:08.809597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.070 [2024-05-13 20:47:08.809814] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.070 [2024-05-13 20:47:08.810030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.070 [2024-05-13 20:47:08.810038] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.070 [2024-05-13 20:47:08.810045] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.070 [2024-05-13 20:47:08.813558] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.070 [2024-05-13 20:47:08.822463] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.070 [2024-05-13 20:47:08.823074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.823415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.823425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.070 [2024-05-13 20:47:08.823432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.070 [2024-05-13 20:47:08.823649] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.070 [2024-05-13 20:47:08.823866] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.070 [2024-05-13 20:47:08.823882] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.070 [2024-05-13 20:47:08.823889] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.070 [2024-05-13 20:47:08.827398] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.070 [2024-05-13 20:47:08.836294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.070 [2024-05-13 20:47:08.837030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.837396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.837410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.070 [2024-05-13 20:47:08.837419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.070 [2024-05-13 20:47:08.837656] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.070 [2024-05-13 20:47:08.837877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.070 [2024-05-13 20:47:08.837885] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.070 [2024-05-13 20:47:08.837897] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.070 [2024-05-13 20:47:08.841422] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.070 [2024-05-13 20:47:08.850156] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.070 [2024-05-13 20:47:08.850787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.851126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.851135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.070 [2024-05-13 20:47:08.851143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.070 [2024-05-13 20:47:08.851364] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.070 [2024-05-13 20:47:08.851582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.070 [2024-05-13 20:47:08.851596] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.070 [2024-05-13 20:47:08.851603] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.070 [2024-05-13 20:47:08.855116] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.070 [2024-05-13 20:47:08.864020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.070 [2024-05-13 20:47:08.864600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.864910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.070 [2024-05-13 20:47:08.864919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.070 [2024-05-13 20:47:08.864927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.071 [2024-05-13 20:47:08.865143] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.071 [2024-05-13 20:47:08.865363] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.071 [2024-05-13 20:47:08.865372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.071 [2024-05-13 20:47:08.865379] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.071 [2024-05-13 20:47:08.868886] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.071 [2024-05-13 20:47:08.877781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.071 [2024-05-13 20:47:08.878343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.878700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.878709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.071 [2024-05-13 20:47:08.878716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.071 [2024-05-13 20:47:08.878933] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.071 [2024-05-13 20:47:08.879149] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.071 [2024-05-13 20:47:08.879156] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.071 [2024-05-13 20:47:08.879167] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.071 [2024-05-13 20:47:08.882677] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.071 [2024-05-13 20:47:08.891579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.071 [2024-05-13 20:47:08.892193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.892545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.892555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.071 [2024-05-13 20:47:08.892562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.071 [2024-05-13 20:47:08.892780] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.071 [2024-05-13 20:47:08.892996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.071 [2024-05-13 20:47:08.893004] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.071 [2024-05-13 20:47:08.893010] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.071 [2024-05-13 20:47:08.896522] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.071 [2024-05-13 20:47:08.905420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.071 [2024-05-13 20:47:08.906053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.906429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.906443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.071 [2024-05-13 20:47:08.906452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.071 [2024-05-13 20:47:08.906689] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.071 [2024-05-13 20:47:08.906909] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.071 [2024-05-13 20:47:08.906918] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.071 [2024-05-13 20:47:08.906925] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.071 [2024-05-13 20:47:08.910439] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.071 [2024-05-13 20:47:08.919362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.071 [2024-05-13 20:47:08.919951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.920295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.920305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.071 [2024-05-13 20:47:08.920318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.071 [2024-05-13 20:47:08.920535] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.071 [2024-05-13 20:47:08.920752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.071 [2024-05-13 20:47:08.920760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.071 [2024-05-13 20:47:08.920766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.071 [2024-05-13 20:47:08.924283] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.071 [2024-05-13 20:47:08.933197] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.071 [2024-05-13 20:47:08.933888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.934249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.934261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.071 [2024-05-13 20:47:08.934271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.071 [2024-05-13 20:47:08.934515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.071 [2024-05-13 20:47:08.934737] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.071 [2024-05-13 20:47:08.934746] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.071 [2024-05-13 20:47:08.934753] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.071 [2024-05-13 20:47:08.938268] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.071 [2024-05-13 20:47:08.946974] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.071 [2024-05-13 20:47:08.947625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.947993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.948005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.071 [2024-05-13 20:47:08.948015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.071 [2024-05-13 20:47:08.948252] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.071 [2024-05-13 20:47:08.948480] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.071 [2024-05-13 20:47:08.948489] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.071 [2024-05-13 20:47:08.948497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.071 [2024-05-13 20:47:08.952008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.071 [2024-05-13 20:47:08.960908] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.071 [2024-05-13 20:47:08.961590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.961907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.961920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.071 [2024-05-13 20:47:08.961930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.071 [2024-05-13 20:47:08.962167] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.071 [2024-05-13 20:47:08.962395] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.071 [2024-05-13 20:47:08.962404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.071 [2024-05-13 20:47:08.962411] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.071 [2024-05-13 20:47:08.965928] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.071 [2024-05-13 20:47:08.974837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.071 [2024-05-13 20:47:08.975430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.975757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.975770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.071 [2024-05-13 20:47:08.975779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.071 [2024-05-13 20:47:08.976015] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.071 [2024-05-13 20:47:08.976235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.071 [2024-05-13 20:47:08.976244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.071 [2024-05-13 20:47:08.976251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.071 [2024-05-13 20:47:08.979771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.071 [2024-05-13 20:47:08.988670] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.071 [2024-05-13 20:47:08.989392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.989672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.071 [2024-05-13 20:47:08.989684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.071 [2024-05-13 20:47:08.989694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.071 [2024-05-13 20:47:08.989931] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.071 [2024-05-13 20:47:08.990151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.071 [2024-05-13 20:47:08.990159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.071 [2024-05-13 20:47:08.990166] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.071 [2024-05-13 20:47:08.993683] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.072 [2024-05-13 20:47:09.002597] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.072 [2024-05-13 20:47:09.003257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.072 [2024-05-13 20:47:09.003663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.072 [2024-05-13 20:47:09.003677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.072 [2024-05-13 20:47:09.003687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.072 [2024-05-13 20:47:09.003923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.072 [2024-05-13 20:47:09.004143] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.072 [2024-05-13 20:47:09.004152] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.072 [2024-05-13 20:47:09.004159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.072 [2024-05-13 20:47:09.007675] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.335 [2024-05-13 20:47:09.016367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.335 [2024-05-13 20:47:09.016981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.017362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.017373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.335 [2024-05-13 20:47:09.017380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.335 [2024-05-13 20:47:09.017598] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.335 [2024-05-13 20:47:09.017815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.335 [2024-05-13 20:47:09.017823] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.335 [2024-05-13 20:47:09.017829] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.335 [2024-05-13 20:47:09.021357] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.335 [2024-05-13 20:47:09.030251] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.335 [2024-05-13 20:47:09.030958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.031325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.031338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.335 [2024-05-13 20:47:09.031348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.335 [2024-05-13 20:47:09.031585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.335 [2024-05-13 20:47:09.031805] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.335 [2024-05-13 20:47:09.031814] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.335 [2024-05-13 20:47:09.031821] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.335 [2024-05-13 20:47:09.035337] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.335 [2024-05-13 20:47:09.044024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.335 [2024-05-13 20:47:09.044376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.044751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.044762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.335 [2024-05-13 20:47:09.044770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.335 [2024-05-13 20:47:09.044989] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.335 [2024-05-13 20:47:09.045206] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.335 [2024-05-13 20:47:09.045213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.335 [2024-05-13 20:47:09.045220] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.335 [2024-05-13 20:47:09.048730] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.335 [2024-05-13 20:47:09.057831] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.335 [2024-05-13 20:47:09.058594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.058960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.058972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.335 [2024-05-13 20:47:09.058986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.335 [2024-05-13 20:47:09.059223] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.335 [2024-05-13 20:47:09.059450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.335 [2024-05-13 20:47:09.059459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.335 [2024-05-13 20:47:09.059466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.335 [2024-05-13 20:47:09.062977] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.335 [2024-05-13 20:47:09.071663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.335 [2024-05-13 20:47:09.072271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.072639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.072649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.335 [2024-05-13 20:47:09.072657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.335 [2024-05-13 20:47:09.072874] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.335 [2024-05-13 20:47:09.073091] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.335 [2024-05-13 20:47:09.073099] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.335 [2024-05-13 20:47:09.073105] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.335 [2024-05-13 20:47:09.076614] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.335 [2024-05-13 20:47:09.085507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.335 [2024-05-13 20:47:09.086117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.086453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.086463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.335 [2024-05-13 20:47:09.086471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.335 [2024-05-13 20:47:09.086688] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.335 [2024-05-13 20:47:09.086904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.335 [2024-05-13 20:47:09.086912] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.335 [2024-05-13 20:47:09.086919] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.335 [2024-05-13 20:47:09.090426] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.335 [2024-05-13 20:47:09.099317] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.335 [2024-05-13 20:47:09.099920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.100277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.100286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.335 [2024-05-13 20:47:09.100297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.335 [2024-05-13 20:47:09.100519] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.335 [2024-05-13 20:47:09.100736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.335 [2024-05-13 20:47:09.100743] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.335 [2024-05-13 20:47:09.100750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.335 [2024-05-13 20:47:09.104253] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.335 [2024-05-13 20:47:09.113150] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.335 [2024-05-13 20:47:09.113813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.114210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.114222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.335 [2024-05-13 20:47:09.114231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.335 [2024-05-13 20:47:09.114475] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.335 [2024-05-13 20:47:09.114696] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.335 [2024-05-13 20:47:09.114705] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.335 [2024-05-13 20:47:09.114713] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.335 [2024-05-13 20:47:09.118225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.335 [2024-05-13 20:47:09.126928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.335 [2024-05-13 20:47:09.127602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.127966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.127978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.335 [2024-05-13 20:47:09.127987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.335 [2024-05-13 20:47:09.128224] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.335 [2024-05-13 20:47:09.128451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.335 [2024-05-13 20:47:09.128460] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.335 [2024-05-13 20:47:09.128467] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.335 [2024-05-13 20:47:09.131980] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.335 [2024-05-13 20:47:09.140877] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.335 [2024-05-13 20:47:09.141589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.335 [2024-05-13 20:47:09.141954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.141967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.336 [2024-05-13 20:47:09.141976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.336 [2024-05-13 20:47:09.142217] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.336 [2024-05-13 20:47:09.142445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.336 [2024-05-13 20:47:09.142454] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.336 [2024-05-13 20:47:09.142462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.336 [2024-05-13 20:47:09.145977] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.336 [2024-05-13 20:47:09.154682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.336 [2024-05-13 20:47:09.155249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.155603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.155614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.336 [2024-05-13 20:47:09.155622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.336 [2024-05-13 20:47:09.155839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.336 [2024-05-13 20:47:09.156056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.336 [2024-05-13 20:47:09.156063] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.336 [2024-05-13 20:47:09.156070] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.336 [2024-05-13 20:47:09.159586] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.336 [2024-05-13 20:47:09.168484] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.336 [2024-05-13 20:47:09.169170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.169556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.169570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.336 [2024-05-13 20:47:09.169579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.336 [2024-05-13 20:47:09.169816] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.336 [2024-05-13 20:47:09.170036] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.336 [2024-05-13 20:47:09.170044] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.336 [2024-05-13 20:47:09.170051] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.336 [2024-05-13 20:47:09.173570] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.336 [2024-05-13 20:47:09.182270] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.336 [2024-05-13 20:47:09.182981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.183341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.183355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.336 [2024-05-13 20:47:09.183364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.336 [2024-05-13 20:47:09.183601] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.336 [2024-05-13 20:47:09.183825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.336 [2024-05-13 20:47:09.183835] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.336 [2024-05-13 20:47:09.183842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.336 [2024-05-13 20:47:09.187359] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.336 [2024-05-13 20:47:09.196053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.336 [2024-05-13 20:47:09.196672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.197016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.197026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.336 [2024-05-13 20:47:09.197033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.336 [2024-05-13 20:47:09.197251] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.336 [2024-05-13 20:47:09.197470] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.336 [2024-05-13 20:47:09.197479] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.336 [2024-05-13 20:47:09.197486] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.336 [2024-05-13 20:47:09.200998] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.336 [2024-05-13 20:47:09.209897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.336 [2024-05-13 20:47:09.210612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.210979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.210991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.336 [2024-05-13 20:47:09.211001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.336 [2024-05-13 20:47:09.211237] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.336 [2024-05-13 20:47:09.211466] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.336 [2024-05-13 20:47:09.211475] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.336 [2024-05-13 20:47:09.211483] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.336 [2024-05-13 20:47:09.214996] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.336 [2024-05-13 20:47:09.223700] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.336 [2024-05-13 20:47:09.224390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.224765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.224777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.336 [2024-05-13 20:47:09.224787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.336 [2024-05-13 20:47:09.225023] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.336 [2024-05-13 20:47:09.225244] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.336 [2024-05-13 20:47:09.225256] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.336 [2024-05-13 20:47:09.225264] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.336 [2024-05-13 20:47:09.228783] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.336 [2024-05-13 20:47:09.237477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.336 [2024-05-13 20:47:09.238160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.238553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.238568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.336 [2024-05-13 20:47:09.238577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.336 [2024-05-13 20:47:09.238814] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.336 [2024-05-13 20:47:09.239034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.336 [2024-05-13 20:47:09.239043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.336 [2024-05-13 20:47:09.239050] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.336 [2024-05-13 20:47:09.242570] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.336 [2024-05-13 20:47:09.251264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.336 [2024-05-13 20:47:09.251859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.252225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.252237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.336 [2024-05-13 20:47:09.252246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.336 [2024-05-13 20:47:09.252489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.336 [2024-05-13 20:47:09.252709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.336 [2024-05-13 20:47:09.252718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.336 [2024-05-13 20:47:09.252725] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.336 [2024-05-13 20:47:09.256242] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.336 [2024-05-13 20:47:09.265150] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.336 [2024-05-13 20:47:09.265808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.266171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.336 [2024-05-13 20:47:09.266183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.336 [2024-05-13 20:47:09.266192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.336 [2024-05-13 20:47:09.266436] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.336 [2024-05-13 20:47:09.266658] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.336 [2024-05-13 20:47:09.266667] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.337 [2024-05-13 20:47:09.266678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.337 [2024-05-13 20:47:09.270187] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.600 [2024-05-13 20:47:09.279087] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.600 [2024-05-13 20:47:09.279760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.600 [2024-05-13 20:47:09.280122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.600 [2024-05-13 20:47:09.280135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.600 [2024-05-13 20:47:09.280144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.600 [2024-05-13 20:47:09.280387] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.600 [2024-05-13 20:47:09.280608] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.600 [2024-05-13 20:47:09.280617] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.600 [2024-05-13 20:47:09.280624] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.600 [2024-05-13 20:47:09.284139] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.600 [2024-05-13 20:47:09.293038] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.600 [2024-05-13 20:47:09.293578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.600 [2024-05-13 20:47:09.293935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.600 [2024-05-13 20:47:09.293945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.600 [2024-05-13 20:47:09.293953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.600 [2024-05-13 20:47:09.294170] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.600 [2024-05-13 20:47:09.294392] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.600 [2024-05-13 20:47:09.294400] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.600 [2024-05-13 20:47:09.294407] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.600 [2024-05-13 20:47:09.297917] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.600 [2024-05-13 20:47:09.306814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.600 [2024-05-13 20:47:09.307507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.600 [2024-05-13 20:47:09.307872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.600 [2024-05-13 20:47:09.307884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.600 [2024-05-13 20:47:09.307893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.600 [2024-05-13 20:47:09.308129] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.600 [2024-05-13 20:47:09.308355] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.600 [2024-05-13 20:47:09.308364] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.600 [2024-05-13 20:47:09.308371] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.600 [2024-05-13 20:47:09.311883] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.600 [2024-05-13 20:47:09.320595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.600 [2024-05-13 20:47:09.321172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.600 [2024-05-13 20:47:09.321516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.600 [2024-05-13 20:47:09.321526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.600 [2024-05-13 20:47:09.321534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.600 [2024-05-13 20:47:09.321752] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.600 [2024-05-13 20:47:09.321968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.600 [2024-05-13 20:47:09.321976] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.600 [2024-05-13 20:47:09.321983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.600 [2024-05-13 20:47:09.325498] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.600 [2024-05-13 20:47:09.334393] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.600 [2024-05-13 20:47:09.335034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.600 [2024-05-13 20:47:09.335417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.600 [2024-05-13 20:47:09.335431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.600 [2024-05-13 20:47:09.335440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.600 [2024-05-13 20:47:09.335678] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.600 [2024-05-13 20:47:09.335898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.600 [2024-05-13 20:47:09.335907] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.600 [2024-05-13 20:47:09.335914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.600 [2024-05-13 20:47:09.339434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.600 [2024-05-13 20:47:09.348334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.600 [2024-05-13 20:47:09.348998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.600 [2024-05-13 20:47:09.349361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.600 [2024-05-13 20:47:09.349375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.600 [2024-05-13 20:47:09.349384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.600 [2024-05-13 20:47:09.349620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.600 [2024-05-13 20:47:09.349841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.600 [2024-05-13 20:47:09.349849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.600 [2024-05-13 20:47:09.349856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.600 [2024-05-13 20:47:09.353372] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.600 [2024-05-13 20:47:09.362268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.601 [2024-05-13 20:47:09.362949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.363323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.363336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.601 [2024-05-13 20:47:09.363345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.601 [2024-05-13 20:47:09.363582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.601 [2024-05-13 20:47:09.363802] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.601 [2024-05-13 20:47:09.363810] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.601 [2024-05-13 20:47:09.363817] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.601 [2024-05-13 20:47:09.367338] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.601 [2024-05-13 20:47:09.376035] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.601 [2024-05-13 20:47:09.376654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.377069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.377079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.601 [2024-05-13 20:47:09.377086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.601 [2024-05-13 20:47:09.377303] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.601 [2024-05-13 20:47:09.377523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.601 [2024-05-13 20:47:09.377532] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.601 [2024-05-13 20:47:09.377539] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.601 [2024-05-13 20:47:09.381048] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.601 [2024-05-13 20:47:09.389951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.601 [2024-05-13 20:47:09.390615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.390980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.390993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.601 [2024-05-13 20:47:09.391002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.601 [2024-05-13 20:47:09.391239] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.601 [2024-05-13 20:47:09.391465] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.601 [2024-05-13 20:47:09.391475] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.601 [2024-05-13 20:47:09.391482] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.601 [2024-05-13 20:47:09.394990] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.601 [2024-05-13 20:47:09.403882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.601 [2024-05-13 20:47:09.404527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.404893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.404906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.601 [2024-05-13 20:47:09.404915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.601 [2024-05-13 20:47:09.405151] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.601 [2024-05-13 20:47:09.405378] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.601 [2024-05-13 20:47:09.405387] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.601 [2024-05-13 20:47:09.405395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.601 [2024-05-13 20:47:09.408904] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.601 [2024-05-13 20:47:09.417798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.601 [2024-05-13 20:47:09.418451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.418820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.418832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.601 [2024-05-13 20:47:09.418841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.601 [2024-05-13 20:47:09.419078] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.601 [2024-05-13 20:47:09.419298] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.601 [2024-05-13 20:47:09.419306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.601 [2024-05-13 20:47:09.419331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.601 [2024-05-13 20:47:09.422845] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.601 [2024-05-13 20:47:09.431570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.601 [2024-05-13 20:47:09.432281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.432671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.432685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.601 [2024-05-13 20:47:09.432694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.601 [2024-05-13 20:47:09.432930] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.601 [2024-05-13 20:47:09.433150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.601 [2024-05-13 20:47:09.433158] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.601 [2024-05-13 20:47:09.433165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.601 [2024-05-13 20:47:09.436685] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.601 [2024-05-13 20:47:09.445379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.601 [2024-05-13 20:47:09.446086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.446474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.446496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.601 [2024-05-13 20:47:09.446505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.601 [2024-05-13 20:47:09.446742] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.601 [2024-05-13 20:47:09.446962] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.601 [2024-05-13 20:47:09.446971] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.601 [2024-05-13 20:47:09.446978] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.601 [2024-05-13 20:47:09.450498] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.601 [2024-05-13 20:47:09.459193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.601 [2024-05-13 20:47:09.459863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.460228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.460241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.601 [2024-05-13 20:47:09.460250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.601 [2024-05-13 20:47:09.460495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.601 [2024-05-13 20:47:09.460716] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.601 [2024-05-13 20:47:09.460724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.601 [2024-05-13 20:47:09.460731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.601 [2024-05-13 20:47:09.464239] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.601 [2024-05-13 20:47:09.473129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.601 [2024-05-13 20:47:09.473804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.474172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.474185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.601 [2024-05-13 20:47:09.474194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.601 [2024-05-13 20:47:09.474438] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.601 [2024-05-13 20:47:09.474659] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.601 [2024-05-13 20:47:09.474667] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.601 [2024-05-13 20:47:09.474675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.601 [2024-05-13 20:47:09.478188] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.601 [2024-05-13 20:47:09.486893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.601 [2024-05-13 20:47:09.487477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.487656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.601 [2024-05-13 20:47:09.487669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.601 [2024-05-13 20:47:09.487681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.601 [2024-05-13 20:47:09.487900] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.601 [2024-05-13 20:47:09.488118] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.602 [2024-05-13 20:47:09.488126] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.602 [2024-05-13 20:47:09.488134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.602 [2024-05-13 20:47:09.491791] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.602 [2024-05-13 20:47:09.500688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.602 [2024-05-13 20:47:09.501394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.602 [2024-05-13 20:47:09.501835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.602 [2024-05-13 20:47:09.501848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.602 [2024-05-13 20:47:09.501857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.602 [2024-05-13 20:47:09.502094] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.602 [2024-05-13 20:47:09.502321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.602 [2024-05-13 20:47:09.502330] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.602 [2024-05-13 20:47:09.502337] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.602 [2024-05-13 20:47:09.505846] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.602 [2024-05-13 20:47:09.514531] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.602 [2024-05-13 20:47:09.515185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.602 [2024-05-13 20:47:09.515557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.602 [2024-05-13 20:47:09.515571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.602 [2024-05-13 20:47:09.515580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.602 [2024-05-13 20:47:09.515817] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.602 [2024-05-13 20:47:09.516037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.602 [2024-05-13 20:47:09.516045] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.602 [2024-05-13 20:47:09.516052] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.602 [2024-05-13 20:47:09.519575] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.602 [2024-05-13 20:47:09.528473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.602 [2024-05-13 20:47:09.529142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.602 [2024-05-13 20:47:09.529501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.602 [2024-05-13 20:47:09.529515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.602 [2024-05-13 20:47:09.529525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.602 [2024-05-13 20:47:09.529765] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.602 [2024-05-13 20:47:09.529985] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.602 [2024-05-13 20:47:09.530000] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.602 [2024-05-13 20:47:09.530007] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.602 [2024-05-13 20:47:09.533522] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.602 [2024-05-13 20:47:09.542415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.866 [2024-05-13 20:47:09.543084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.543470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.543484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.866 [2024-05-13 20:47:09.543493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.866 [2024-05-13 20:47:09.543730] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.866 [2024-05-13 20:47:09.543950] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.866 [2024-05-13 20:47:09.543959] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.866 [2024-05-13 20:47:09.543966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.866 [2024-05-13 20:47:09.547487] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.866 [2024-05-13 20:47:09.556183] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.866 [2024-05-13 20:47:09.556855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.557223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.557235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.866 [2024-05-13 20:47:09.557245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.866 [2024-05-13 20:47:09.557490] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.866 [2024-05-13 20:47:09.557711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.866 [2024-05-13 20:47:09.557720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.866 [2024-05-13 20:47:09.557727] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.866 [2024-05-13 20:47:09.561242] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.866 [2024-05-13 20:47:09.569939] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.866 [2024-05-13 20:47:09.570629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.570997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.571010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.866 [2024-05-13 20:47:09.571019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.866 [2024-05-13 20:47:09.571257] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.866 [2024-05-13 20:47:09.571491] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.866 [2024-05-13 20:47:09.571500] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.866 [2024-05-13 20:47:09.571508] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.866 [2024-05-13 20:47:09.575020] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.866 [2024-05-13 20:47:09.583705] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.866 [2024-05-13 20:47:09.584159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.584514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.584524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.866 [2024-05-13 20:47:09.584532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.866 [2024-05-13 20:47:09.584750] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.866 [2024-05-13 20:47:09.584966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.866 [2024-05-13 20:47:09.584974] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.866 [2024-05-13 20:47:09.584981] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.866 [2024-05-13 20:47:09.588492] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.866 [2024-05-13 20:47:09.597597] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.866 [2024-05-13 20:47:09.598157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.598386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.598396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.866 [2024-05-13 20:47:09.598403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.866 [2024-05-13 20:47:09.598620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.866 [2024-05-13 20:47:09.598836] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.866 [2024-05-13 20:47:09.598844] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.866 [2024-05-13 20:47:09.598851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.866 [2024-05-13 20:47:09.602362] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.866 [2024-05-13 20:47:09.611464] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.866 [2024-05-13 20:47:09.612167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.612541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.612554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.866 [2024-05-13 20:47:09.612564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.866 [2024-05-13 20:47:09.612801] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.866 [2024-05-13 20:47:09.613021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.866 [2024-05-13 20:47:09.613033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.866 [2024-05-13 20:47:09.613041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.866 [2024-05-13 20:47:09.616562] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.866 [2024-05-13 20:47:09.625334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.866 [2024-05-13 20:47:09.626003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.626370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.626383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.866 [2024-05-13 20:47:09.626392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.866 [2024-05-13 20:47:09.626629] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.866 [2024-05-13 20:47:09.626850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.866 [2024-05-13 20:47:09.626858] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.866 [2024-05-13 20:47:09.626865] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.866 [2024-05-13 20:47:09.630379] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.866 [2024-05-13 20:47:09.639294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.866 [2024-05-13 20:47:09.639962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.640344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.640358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.866 [2024-05-13 20:47:09.640367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.866 [2024-05-13 20:47:09.640604] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.866 [2024-05-13 20:47:09.640824] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.866 [2024-05-13 20:47:09.640833] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.866 [2024-05-13 20:47:09.640840] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.866 [2024-05-13 20:47:09.644359] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.866 [2024-05-13 20:47:09.653051] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.866 [2024-05-13 20:47:09.653732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.654104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.866 [2024-05-13 20:47:09.654116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.866 [2024-05-13 20:47:09.654125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.866 [2024-05-13 20:47:09.654369] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.866 [2024-05-13 20:47:09.654590] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.866 [2024-05-13 20:47:09.654598] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.866 [2024-05-13 20:47:09.654610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.866 [2024-05-13 20:47:09.658126] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.867 [2024-05-13 20:47:09.666821] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.867 [2024-05-13 20:47:09.667604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.668023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.668036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.867 [2024-05-13 20:47:09.668045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.867 [2024-05-13 20:47:09.668282] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.867 [2024-05-13 20:47:09.668510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.867 [2024-05-13 20:47:09.668519] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.867 [2024-05-13 20:47:09.668526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.867 [2024-05-13 20:47:09.672035] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.867 [2024-05-13 20:47:09.680721] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.867 [2024-05-13 20:47:09.681415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.681792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.681804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.867 [2024-05-13 20:47:09.681814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.867 [2024-05-13 20:47:09.682051] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.867 [2024-05-13 20:47:09.682271] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.867 [2024-05-13 20:47:09.682279] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.867 [2024-05-13 20:47:09.682286] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.867 [2024-05-13 20:47:09.685806] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.867 [2024-05-13 20:47:09.694490] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.867 [2024-05-13 20:47:09.695213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.695594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.695607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.867 [2024-05-13 20:47:09.695617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.867 [2024-05-13 20:47:09.695854] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.867 [2024-05-13 20:47:09.696074] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.867 [2024-05-13 20:47:09.696083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.867 [2024-05-13 20:47:09.696090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.867 [2024-05-13 20:47:09.699612] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.867 [2024-05-13 20:47:09.708412] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.867 [2024-05-13 20:47:09.709123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.709503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.709516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.867 [2024-05-13 20:47:09.709525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.867 [2024-05-13 20:47:09.709762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.867 [2024-05-13 20:47:09.709982] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.867 [2024-05-13 20:47:09.709991] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.867 [2024-05-13 20:47:09.709998] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.867 [2024-05-13 20:47:09.713517] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.867 [2024-05-13 20:47:09.722212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.867 [2024-05-13 20:47:09.722883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.723161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.723174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.867 [2024-05-13 20:47:09.723183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.867 [2024-05-13 20:47:09.723427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.867 [2024-05-13 20:47:09.723648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.867 [2024-05-13 20:47:09.723656] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.867 [2024-05-13 20:47:09.723663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.867 [2024-05-13 20:47:09.727173] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.867 [2024-05-13 20:47:09.736069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.867 [2024-05-13 20:47:09.736734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.737104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.737117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.867 [2024-05-13 20:47:09.737126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.867 [2024-05-13 20:47:09.737371] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.867 [2024-05-13 20:47:09.737592] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.867 [2024-05-13 20:47:09.737600] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.867 [2024-05-13 20:47:09.737607] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.867 [2024-05-13 20:47:09.741126] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.867 [2024-05-13 20:47:09.750018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.867 [2024-05-13 20:47:09.750699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.751066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.751079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.867 [2024-05-13 20:47:09.751088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.867 [2024-05-13 20:47:09.751333] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.867 [2024-05-13 20:47:09.751555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.867 [2024-05-13 20:47:09.751563] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.867 [2024-05-13 20:47:09.751570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.867 [2024-05-13 20:47:09.755084] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.867 [2024-05-13 20:47:09.763975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.867 [2024-05-13 20:47:09.764642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.765013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.765025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.867 [2024-05-13 20:47:09.765034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.867 [2024-05-13 20:47:09.765271] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.867 [2024-05-13 20:47:09.765499] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.867 [2024-05-13 20:47:09.765508] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.867 [2024-05-13 20:47:09.765515] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.867 [2024-05-13 20:47:09.769031] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.867 [2024-05-13 20:47:09.777924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.867 [2024-05-13 20:47:09.778624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.778992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.779005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.867 [2024-05-13 20:47:09.779014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.867 [2024-05-13 20:47:09.779250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.867 [2024-05-13 20:47:09.779479] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.867 [2024-05-13 20:47:09.779488] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.867 [2024-05-13 20:47:09.779495] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.867 [2024-05-13 20:47:09.783008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:53.867 [2024-05-13 20:47:09.791695] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.867 [2024-05-13 20:47:09.792402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.792775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.867 [2024-05-13 20:47:09.792789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.867 [2024-05-13 20:47:09.792798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.867 [2024-05-13 20:47:09.793034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.867 [2024-05-13 20:47:09.793254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.868 [2024-05-13 20:47:09.793262] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.868 [2024-05-13 20:47:09.793270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.868 [2024-05-13 20:47:09.796789] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.868 [2024-05-13 20:47:09.805477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.868 [2024-05-13 20:47:09.806020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.868 [2024-05-13 20:47:09.806405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.868 [2024-05-13 20:47:09.806419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:53.868 [2024-05-13 20:47:09.806428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:53.868 [2024-05-13 20:47:09.806665] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:53.868 [2024-05-13 20:47:09.806885] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.868 [2024-05-13 20:47:09.806893] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.868 [2024-05-13 20:47:09.806900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.131 [2024-05-13 20:47:09.810422] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.131 [2024-05-13 20:47:09.819340] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.131 [2024-05-13 20:47:09.819993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.131 [2024-05-13 20:47:09.820362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.131 [2024-05-13 20:47:09.820376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.131 [2024-05-13 20:47:09.820385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.131 [2024-05-13 20:47:09.820622] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.131 [2024-05-13 20:47:09.820842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.131 [2024-05-13 20:47:09.820850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.131 [2024-05-13 20:47:09.820858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.131 [2024-05-13 20:47:09.824374] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.131 [2024-05-13 20:47:09.833265] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.131 [2024-05-13 20:47:09.833984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.131 [2024-05-13 20:47:09.834351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.131 [2024-05-13 20:47:09.834369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.131 [2024-05-13 20:47:09.834379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.131 [2024-05-13 20:47:09.834616] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.131 [2024-05-13 20:47:09.835047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.131 [2024-05-13 20:47:09.835058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.131 [2024-05-13 20:47:09.835066] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.131 [2024-05-13 20:47:09.838588] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.131 [2024-05-13 20:47:09.847080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.131 [2024-05-13 20:47:09.847779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.131 [2024-05-13 20:47:09.848147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.131 [2024-05-13 20:47:09.848160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.131 [2024-05-13 20:47:09.848169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.131 [2024-05-13 20:47:09.848414] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.131 [2024-05-13 20:47:09.848635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.131 [2024-05-13 20:47:09.848643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.131 [2024-05-13 20:47:09.848650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.131 [2024-05-13 20:47:09.852161] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.131 [2024-05-13 20:47:09.860845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.131 [2024-05-13 20:47:09.861614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.131 [2024-05-13 20:47:09.861985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.131 [2024-05-13 20:47:09.861998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.131 [2024-05-13 20:47:09.862007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.132 [2024-05-13 20:47:09.862244] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.132 [2024-05-13 20:47:09.862471] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.132 [2024-05-13 20:47:09.862481] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.132 [2024-05-13 20:47:09.862488] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.132 [2024-05-13 20:47:09.866006] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.132 [2024-05-13 20:47:09.874697] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.132 [2024-05-13 20:47:09.875324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.132 [2024-05-13 20:47:09.875669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.132 [2024-05-13 20:47:09.875679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.132 [2024-05-13 20:47:09.875690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.132 [2024-05-13 20:47:09.875909] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.132 [2024-05-13 20:47:09.876125] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.132 [2024-05-13 20:47:09.876140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.132 [2024-05-13 20:47:09.876147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.132 [2024-05-13 20:47:09.879659] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.132 [2024-05-13 20:47:09.888546] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.132 [2024-05-13 20:47:09.889099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.132 [2024-05-13 20:47:09.889490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.132 [2024-05-13 20:47:09.889504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.132 [2024-05-13 20:47:09.889513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.132 [2024-05-13 20:47:09.889750] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.132 [2024-05-13 20:47:09.889970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.132 [2024-05-13 20:47:09.889978] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.132 [2024-05-13 20:47:09.889985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.132 [2024-05-13 20:47:09.893504] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.132 [2024-05-13 20:47:09.902407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.132 [2024-05-13 20:47:09.903095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.132 [2024-05-13 20:47:09.903423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.132 [2024-05-13 20:47:09.903437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.132 [2024-05-13 20:47:09.903446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.132 [2024-05-13 20:47:09.903683] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.132 [2024-05-13 20:47:09.903903] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.132 [2024-05-13 20:47:09.903912] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.132 [2024-05-13 20:47:09.903920] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.132 [2024-05-13 20:47:09.907440] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.132 [2024-05-13 20:47:09.916337] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.132 [2024-05-13 20:47:09.916945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.132 [2024-05-13 20:47:09.917329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.132 [2024-05-13 20:47:09.917342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.132 [2024-05-13 20:47:09.917351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.132 [2024-05-13 20:47:09.917592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.132 [2024-05-13 20:47:09.917812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.132 [2024-05-13 20:47:09.917820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.132 [2024-05-13 20:47:09.917828] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.132 [2024-05-13 20:47:09.921354] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.132 [2024-05-13 20:47:09.930250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.132 [2024-05-13 20:47:09.930876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.132 [2024-05-13 20:47:09.931212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.132 [2024-05-13 20:47:09.931221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.132 [2024-05-13 20:47:09.931229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.132 [2024-05-13 20:47:09.931451] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.132 [2024-05-13 20:47:09.931668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.132 [2024-05-13 20:47:09.931676] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.132 [2024-05-13 20:47:09.931683] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.132 [2024-05-13 20:47:09.935192] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.132 [2024-05-13 20:47:09.944111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.132 [2024-05-13 20:47:09.944778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.132 [2024-05-13 20:47:09.945146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.132 [2024-05-13 20:47:09.945158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.132 [2024-05-13 20:47:09.945168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.132 [2024-05-13 20:47:09.945413] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.132 [2024-05-13 20:47:09.945633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.132 [2024-05-13 20:47:09.945642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.132 [2024-05-13 20:47:09.945649] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.132 [2024-05-13 20:47:09.949159] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.132 [2024-05-13 20:47:09.958052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.132 [2024-05-13 20:47:09.958736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.132 [2024-05-13 20:47:09.959070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.132 [2024-05-13 20:47:09.959083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.132 [2024-05-13 20:47:09.959092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.132 [2024-05-13 20:47:09.959336] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.132 [2024-05-13 20:47:09.959561] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.132 [2024-05-13 20:47:09.959570] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.132 [2024-05-13 20:47:09.959577] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.133 [2024-05-13 20:47:09.963086] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.133 [2024-05-13 20:47:09.971977] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.133 [2024-05-13 20:47:09.972669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:09.973040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:09.973053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.133 [2024-05-13 20:47:09.973062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.133 [2024-05-13 20:47:09.973298] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.133 [2024-05-13 20:47:09.973526] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.133 [2024-05-13 20:47:09.973535] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.133 [2024-05-13 20:47:09.973543] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.133 [2024-05-13 20:47:09.977060] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.133 [2024-05-13 20:47:09.985750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.133 [2024-05-13 20:47:09.986540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:09.986775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:09.986789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.133 [2024-05-13 20:47:09.986798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.133 [2024-05-13 20:47:09.987035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.133 [2024-05-13 20:47:09.987256] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.133 [2024-05-13 20:47:09.987264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.133 [2024-05-13 20:47:09.987271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.133 [2024-05-13 20:47:09.990790] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.133 [2024-05-13 20:47:09.999681] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.133 [2024-05-13 20:47:10.000396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:10.000854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:10.000867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.133 [2024-05-13 20:47:10.000876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.133 [2024-05-13 20:47:10.001588] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.133 [2024-05-13 20:47:10.001812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.133 [2024-05-13 20:47:10.001825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.133 [2024-05-13 20:47:10.001833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.133 [2024-05-13 20:47:10.005358] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.133 [2024-05-13 20:47:10.013434] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.133 [2024-05-13 20:47:10.014171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:10.014564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:10.014578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.133 [2024-05-13 20:47:10.014587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.133 [2024-05-13 20:47:10.014824] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.133 [2024-05-13 20:47:10.015045] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.133 [2024-05-13 20:47:10.015053] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.133 [2024-05-13 20:47:10.015061] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.133 [2024-05-13 20:47:10.018585] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.133 [2024-05-13 20:47:10.027280] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.133 [2024-05-13 20:47:10.027985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:10.028366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:10.028380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.133 [2024-05-13 20:47:10.028389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.133 [2024-05-13 20:47:10.028626] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.133 [2024-05-13 20:47:10.028846] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.133 [2024-05-13 20:47:10.028855] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.133 [2024-05-13 20:47:10.028863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.133 [2024-05-13 20:47:10.032380] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.133 [2024-05-13 20:47:10.041076] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.133 [2024-05-13 20:47:10.041810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:10.042185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:10.042197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.133 [2024-05-13 20:47:10.042207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.133 [2024-05-13 20:47:10.042452] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.133 [2024-05-13 20:47:10.042673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.133 [2024-05-13 20:47:10.042682] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.133 [2024-05-13 20:47:10.042693] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.133 [2024-05-13 20:47:10.046205] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.133 [2024-05-13 20:47:10.054896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.133 [2024-05-13 20:47:10.055621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:10.056004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:10.056017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.133 [2024-05-13 20:47:10.056026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.133 [2024-05-13 20:47:10.056262] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.133 [2024-05-13 20:47:10.056490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.133 [2024-05-13 20:47:10.056500] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.133 [2024-05-13 20:47:10.056507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.133 [2024-05-13 20:47:10.060016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.133 [2024-05-13 20:47:10.068715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.133 [2024-05-13 20:47:10.069349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:10.069620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.133 [2024-05-13 20:47:10.069632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.133 [2024-05-13 20:47:10.069641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.133 [2024-05-13 20:47:10.069878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.133 [2024-05-13 20:47:10.070098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.133 [2024-05-13 20:47:10.070107] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.133 [2024-05-13 20:47:10.070114] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.133 [2024-05-13 20:47:10.073633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.397 [2024-05-13 20:47:10.082531] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.397 [2024-05-13 20:47:10.083153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.083507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.083517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.397 [2024-05-13 20:47:10.083525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.397 [2024-05-13 20:47:10.083744] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.397 [2024-05-13 20:47:10.083960] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.397 [2024-05-13 20:47:10.083968] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.397 [2024-05-13 20:47:10.083975] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.397 [2024-05-13 20:47:10.087497] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.397 [2024-05-13 20:47:10.096403] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.397 [2024-05-13 20:47:10.096926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.097249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.097259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.397 [2024-05-13 20:47:10.097266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.397 [2024-05-13 20:47:10.097490] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.397 [2024-05-13 20:47:10.097707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.397 [2024-05-13 20:47:10.097715] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.397 [2024-05-13 20:47:10.097722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.397 [2024-05-13 20:47:10.101228] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.397 [2024-05-13 20:47:10.110329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.397 [2024-05-13 20:47:10.110854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.111205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.111214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.397 [2024-05-13 20:47:10.111222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.397 [2024-05-13 20:47:10.111444] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.397 [2024-05-13 20:47:10.111661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.397 [2024-05-13 20:47:10.111669] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.397 [2024-05-13 20:47:10.111676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.397 [2024-05-13 20:47:10.115183] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.397 [2024-05-13 20:47:10.124089] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.397 [2024-05-13 20:47:10.124723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.124961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.124973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.397 [2024-05-13 20:47:10.124983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.397 [2024-05-13 20:47:10.125220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.397 [2024-05-13 20:47:10.125446] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.397 [2024-05-13 20:47:10.125456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.397 [2024-05-13 20:47:10.125463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.397 [2024-05-13 20:47:10.128973] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.397 [2024-05-13 20:47:10.137911] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.397 [2024-05-13 20:47:10.138510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.138880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.138890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.397 [2024-05-13 20:47:10.138897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.397 [2024-05-13 20:47:10.139115] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.397 [2024-05-13 20:47:10.139335] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.397 [2024-05-13 20:47:10.139343] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.397 [2024-05-13 20:47:10.139350] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.397 [2024-05-13 20:47:10.142866] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.397 [2024-05-13 20:47:10.151761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.397 [2024-05-13 20:47:10.152417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.152767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.152780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.397 [2024-05-13 20:47:10.152789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.397 [2024-05-13 20:47:10.153026] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.397 [2024-05-13 20:47:10.153246] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.397 [2024-05-13 20:47:10.153254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.397 [2024-05-13 20:47:10.153262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.397 [2024-05-13 20:47:10.156788] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.397 [2024-05-13 20:47:10.165682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.397 [2024-05-13 20:47:10.166351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.166794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.166807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.397 [2024-05-13 20:47:10.166816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.397 [2024-05-13 20:47:10.167053] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.397 [2024-05-13 20:47:10.167272] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.397 [2024-05-13 20:47:10.167281] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.397 [2024-05-13 20:47:10.167288] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.397 [2024-05-13 20:47:10.170812] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.397 [2024-05-13 20:47:10.179499] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.397 [2024-05-13 20:47:10.180210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.180613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.180627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.397 [2024-05-13 20:47:10.180637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.397 [2024-05-13 20:47:10.180874] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.397 [2024-05-13 20:47:10.181094] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.397 [2024-05-13 20:47:10.181102] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.397 [2024-05-13 20:47:10.181109] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.397 [2024-05-13 20:47:10.184627] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.397 [2024-05-13 20:47:10.193326] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.397 [2024-05-13 20:47:10.193990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.194402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-05-13 20:47:10.194416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.397 [2024-05-13 20:47:10.194425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.397 [2024-05-13 20:47:10.194662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.397 [2024-05-13 20:47:10.194882] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.397 [2024-05-13 20:47:10.194890] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.397 [2024-05-13 20:47:10.194898] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.397 [2024-05-13 20:47:10.198418] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.398 [2024-05-13 20:47:10.207114] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.398 [2024-05-13 20:47:10.207713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.208085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.208098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.398 [2024-05-13 20:47:10.208107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.398 [2024-05-13 20:47:10.208352] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.398 [2024-05-13 20:47:10.208573] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.398 [2024-05-13 20:47:10.208581] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.398 [2024-05-13 20:47:10.208588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.398 [2024-05-13 20:47:10.212100] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.398 [2024-05-13 20:47:10.221002] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.398 [2024-05-13 20:47:10.221662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.222032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.222049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.398 [2024-05-13 20:47:10.222058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.398 [2024-05-13 20:47:10.222295] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.398 [2024-05-13 20:47:10.222523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.398 [2024-05-13 20:47:10.222532] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.398 [2024-05-13 20:47:10.222539] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.398 [2024-05-13 20:47:10.226048] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.398 [2024-05-13 20:47:10.234951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.398 [2024-05-13 20:47:10.235432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.235746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.235760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.398 [2024-05-13 20:47:10.235769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.398 [2024-05-13 20:47:10.236005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.398 [2024-05-13 20:47:10.236225] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.398 [2024-05-13 20:47:10.236234] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.398 [2024-05-13 20:47:10.236241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.398 [2024-05-13 20:47:10.239764] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.398 [2024-05-13 20:47:10.248865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.398 [2024-05-13 20:47:10.249517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.249887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.249900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.398 [2024-05-13 20:47:10.249909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.398 [2024-05-13 20:47:10.250146] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.398 [2024-05-13 20:47:10.250373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.398 [2024-05-13 20:47:10.250382] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.398 [2024-05-13 20:47:10.250389] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.398 [2024-05-13 20:47:10.253905] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.398 [2024-05-13 20:47:10.262803] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.398 [2024-05-13 20:47:10.263423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.263839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.263852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.398 [2024-05-13 20:47:10.263866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.398 [2024-05-13 20:47:10.264103] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.398 [2024-05-13 20:47:10.264330] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.398 [2024-05-13 20:47:10.264339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.398 [2024-05-13 20:47:10.264346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.398 [2024-05-13 20:47:10.267858] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.398 [2024-05-13 20:47:10.276759] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.398 [2024-05-13 20:47:10.277445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.277869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.277882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.398 [2024-05-13 20:47:10.277892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.398 [2024-05-13 20:47:10.278128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.398 [2024-05-13 20:47:10.278357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.398 [2024-05-13 20:47:10.278366] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.398 [2024-05-13 20:47:10.278373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.398 [2024-05-13 20:47:10.281885] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.398 [2024-05-13 20:47:10.290572] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.398 [2024-05-13 20:47:10.291220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.291737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.291751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.398 [2024-05-13 20:47:10.291760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.398 [2024-05-13 20:47:10.291997] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.398 [2024-05-13 20:47:10.292218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.398 [2024-05-13 20:47:10.292226] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.398 [2024-05-13 20:47:10.292234] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.398 [2024-05-13 20:47:10.295751] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.398 [2024-05-13 20:47:10.304440] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.398 [2024-05-13 20:47:10.305020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.305243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.305253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.398 [2024-05-13 20:47:10.305261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.398 [2024-05-13 20:47:10.305487] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.398 [2024-05-13 20:47:10.305705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.398 [2024-05-13 20:47:10.305713] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.398 [2024-05-13 20:47:10.305720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.398 [2024-05-13 20:47:10.309227] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.398 [2024-05-13 20:47:10.318325] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.398 [2024-05-13 20:47:10.318936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.319272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.319281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.398 [2024-05-13 20:47:10.319288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.398 [2024-05-13 20:47:10.319518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.398 [2024-05-13 20:47:10.319735] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.398 [2024-05-13 20:47:10.319743] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.398 [2024-05-13 20:47:10.319750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.398 [2024-05-13 20:47:10.323262] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.398 [2024-05-13 20:47:10.332162] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.398 [2024-05-13 20:47:10.332841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.333224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-05-13 20:47:10.333236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.398 [2024-05-13 20:47:10.333245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.398 [2024-05-13 20:47:10.333490] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.399 [2024-05-13 20:47:10.333711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.399 [2024-05-13 20:47:10.333720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.399 [2024-05-13 20:47:10.333727] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.399 [2024-05-13 20:47:10.337239] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.661 [2024-05-13 20:47:10.345931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.661 [2024-05-13 20:47:10.346634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.661 [2024-05-13 20:47:10.346959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.661 [2024-05-13 20:47:10.346972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.661 [2024-05-13 20:47:10.346981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.661 [2024-05-13 20:47:10.347218] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.661 [2024-05-13 20:47:10.347451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.661 [2024-05-13 20:47:10.347462] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.661 [2024-05-13 20:47:10.347469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.661 [2024-05-13 20:47:10.350983] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.661 [2024-05-13 20:47:10.359690] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.661 [2024-05-13 20:47:10.360398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.661 [2024-05-13 20:47:10.360781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.661 [2024-05-13 20:47:10.360794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.661 [2024-05-13 20:47:10.360803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.661 [2024-05-13 20:47:10.361040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.661 [2024-05-13 20:47:10.361260] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.661 [2024-05-13 20:47:10.361269] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.661 [2024-05-13 20:47:10.361276] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.661 [2024-05-13 20:47:10.364796] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.661 [2024-05-13 20:47:10.373489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.661 [2024-05-13 20:47:10.374152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.661 [2024-05-13 20:47:10.374540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.661 [2024-05-13 20:47:10.374554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.661 [2024-05-13 20:47:10.374564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.661 [2024-05-13 20:47:10.374801] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.661 [2024-05-13 20:47:10.375021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.661 [2024-05-13 20:47:10.375029] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.662 [2024-05-13 20:47:10.375037] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.662 [2024-05-13 20:47:10.378559] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.662 [2024-05-13 20:47:10.387248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.662 [2024-05-13 20:47:10.387962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.388343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.388357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.662 [2024-05-13 20:47:10.388367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.662 [2024-05-13 20:47:10.388604] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.662 [2024-05-13 20:47:10.388823] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.662 [2024-05-13 20:47:10.388836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.662 [2024-05-13 20:47:10.388844] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.662 [2024-05-13 20:47:10.392362] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.662 [2024-05-13 20:47:10.401059] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.662 [2024-05-13 20:47:10.401789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.402159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.402172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.662 [2024-05-13 20:47:10.402181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.662 [2024-05-13 20:47:10.402423] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.662 [2024-05-13 20:47:10.402643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.662 [2024-05-13 20:47:10.402652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.662 [2024-05-13 20:47:10.402659] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.662 [2024-05-13 20:47:10.406176] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.662 [2024-05-13 20:47:10.414871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.662 [2024-05-13 20:47:10.415567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.415933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.415946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.662 [2024-05-13 20:47:10.415955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.662 [2024-05-13 20:47:10.416191] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.662 [2024-05-13 20:47:10.416418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.662 [2024-05-13 20:47:10.416428] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.662 [2024-05-13 20:47:10.416435] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.662 [2024-05-13 20:47:10.419956] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.662 [2024-05-13 20:47:10.428654] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.662 [2024-05-13 20:47:10.429111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.429556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.429593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.662 [2024-05-13 20:47:10.429604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.662 [2024-05-13 20:47:10.429841] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.662 [2024-05-13 20:47:10.430062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.662 [2024-05-13 20:47:10.430070] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.662 [2024-05-13 20:47:10.430082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.662 [2024-05-13 20:47:10.433603] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.662 [2024-05-13 20:47:10.442511] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.662 [2024-05-13 20:47:10.443237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.443647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.443662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.662 [2024-05-13 20:47:10.443671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.662 [2024-05-13 20:47:10.443908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.662 [2024-05-13 20:47:10.444128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.662 [2024-05-13 20:47:10.444137] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.662 [2024-05-13 20:47:10.444144] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.662 [2024-05-13 20:47:10.447665] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.662 [2024-05-13 20:47:10.456400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.662 [2024-05-13 20:47:10.456968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.457224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.457234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.662 [2024-05-13 20:47:10.457242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.662 [2024-05-13 20:47:10.457469] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.662 [2024-05-13 20:47:10.457689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.662 [2024-05-13 20:47:10.457696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.662 [2024-05-13 20:47:10.457703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.662 [2024-05-13 20:47:10.461207] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.662 [2024-05-13 20:47:10.470307] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.662 [2024-05-13 20:47:10.470983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.471357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.471371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.662 [2024-05-13 20:47:10.471380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.662 [2024-05-13 20:47:10.471618] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.662 [2024-05-13 20:47:10.471838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.662 [2024-05-13 20:47:10.471847] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.662 [2024-05-13 20:47:10.471854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.662 [2024-05-13 20:47:10.475380] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.662 [2024-05-13 20:47:10.484079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.662 [2024-05-13 20:47:10.484751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.485085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.662 [2024-05-13 20:47:10.485097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.662 [2024-05-13 20:47:10.485106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.663 [2024-05-13 20:47:10.485351] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.663 [2024-05-13 20:47:10.485572] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.663 [2024-05-13 20:47:10.485580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.663 [2024-05-13 20:47:10.485587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.663 [2024-05-13 20:47:10.489098] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.663 [2024-05-13 20:47:10.497996] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.663 [2024-05-13 20:47:10.498677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-05-13 20:47:10.499046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-05-13 20:47:10.499059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.663 [2024-05-13 20:47:10.499068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.663 [2024-05-13 20:47:10.499305] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.663 [2024-05-13 20:47:10.499535] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.663 [2024-05-13 20:47:10.499543] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.663 [2024-05-13 20:47:10.499551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.663 [2024-05-13 20:47:10.503063] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.663 [2024-05-13 20:47:10.511764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.663 [2024-05-13 20:47:10.512422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-05-13 20:47:10.512807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-05-13 20:47:10.512819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.663 [2024-05-13 20:47:10.512828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.663 [2024-05-13 20:47:10.513065] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.663 [2024-05-13 20:47:10.513285] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.663 [2024-05-13 20:47:10.513294] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.663 [2024-05-13 20:47:10.513301] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.663 [2024-05-13 20:47:10.516821] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.663 [2024-05-13 20:47:10.525534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.663 [2024-05-13 20:47:10.526093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-05-13 20:47:10.526367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-05-13 20:47:10.526381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.663 [2024-05-13 20:47:10.526391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.663 [2024-05-13 20:47:10.526629] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.663 [2024-05-13 20:47:10.526849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.663 [2024-05-13 20:47:10.526857] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.663 [2024-05-13 20:47:10.526864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.663 [2024-05-13 20:47:10.530378] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.663 [2024-05-13 20:47:10.539308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.663 [2024-05-13 20:47:10.539981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-05-13 20:47:10.540348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-05-13 20:47:10.540361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.663 [2024-05-13 20:47:10.540371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.663 [2024-05-13 20:47:10.540607] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.663 [2024-05-13 20:47:10.540827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.663 [2024-05-13 20:47:10.540835] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.663 [2024-05-13 20:47:10.540842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.663 [2024-05-13 20:47:10.544362] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.663 [2024-05-13 20:47:10.553262] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.663 [2024-05-13 20:47:10.553842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-05-13 20:47:10.554099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-05-13 20:47:10.554112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.663 [2024-05-13 20:47:10.554120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.663 [2024-05-13 20:47:10.554342] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.663 [2024-05-13 20:47:10.554560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.663 [2024-05-13 20:47:10.554567] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.663 [2024-05-13 20:47:10.554574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.663 [2024-05-13 20:47:10.558087] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.663 [2024-05-13 20:47:10.567193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.663 [2024-05-13 20:47:10.567653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-05-13 20:47:10.568042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.663 [2024-05-13 20:47:10.568052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.663 [2024-05-13 20:47:10.568059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.663 [2024-05-13 20:47:10.568276] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.663 [2024-05-13 20:47:10.568497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.663 [2024-05-13 20:47:10.568505] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.663 [2024-05-13 20:47:10.568512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.663 [2024-05-13 20:47:10.572023] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3294077 Killed "${NVMF_APP[@]}" "$@"
00:33:54.663 20:47:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:33:54.663 20:47:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:54.663 20:47:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:54.663 [2024-05-13 20:47:10.581128] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:54.663 20:47:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable
00:33:54.663 20:47:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:54.663 [2024-05-13 20:47:10.581701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.663 [2024-05-13 20:47:10.582042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.663 [2024-05-13 20:47:10.582052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420
00:33:54.663 [2024-05-13 20:47:10.582059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set
00:33:54.663 [2024-05-13 20:47:10.582275] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor
00:33:54.663 [2024-05-13 20:47:10.582496] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:54.663 [2024-05-13 20:47:10.582505] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:54.663 [2024-05-13 20:47:10.582512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:54.663 [2024-05-13 20:47:10.586017] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:54.663 20:47:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3295601
00:33:54.664 20:47:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3295601
00:33:54.664 20:47:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:54.664 20:47:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3295601 ']'
00:33:54.664 20:47:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:54.664 20:47:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:54.664 20:47:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:54.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:54.664 20:47:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:54.664 20:47:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.664 [2024-05-13 20:47:10.594921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.664 [2024-05-13 20:47:10.595419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-05-13 20:47:10.595801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.664 [2024-05-13 20:47:10.595812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.664 [2024-05-13 20:47:10.595820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.664 [2024-05-13 20:47:10.596037] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.664 [2024-05-13 20:47:10.596255] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.664 [2024-05-13 20:47:10.596264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.664 [2024-05-13 20:47:10.596271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.664 [2024-05-13 20:47:10.599787] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.927 [2024-05-13 20:47:10.608698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.927 [2024-05-13 20:47:10.609273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.927 [2024-05-13 20:47:10.609536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.927 [2024-05-13 20:47:10.609546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.927 [2024-05-13 20:47:10.609553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.927 [2024-05-13 20:47:10.609770] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.927 [2024-05-13 20:47:10.609986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.927 [2024-05-13 20:47:10.609994] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.927 [2024-05-13 20:47:10.610001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.927 [2024-05-13 20:47:10.613511] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.927 [2024-05-13 20:47:10.622630] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:54.927 [2024-05-13 20:47:10.623199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.927 [2024-05-13 20:47:10.623462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.927 [2024-05-13 20:47:10.623471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420
00:33:54.927 [2024-05-13 20:47:10.623479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set
00:33:54.927 [2024-05-13 20:47:10.623696] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor
00:33:54.927 [2024-05-13 20:47:10.623912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:54.927 [2024-05-13 20:47:10.623921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:54.927 [2024-05-13 20:47:10.623928] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:54.927 [2024-05-13 20:47:10.627441] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:54.927 [2024-05-13 20:47:10.636550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:54.927 [2024-05-13 20:47:10.637201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.927 [2024-05-13 20:47:10.637252] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization...
00:33:54.927 [2024-05-13 20:47:10.637301] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:54.927 [2024-05-13 20:47:10.637519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.927 [2024-05-13 20:47:10.637534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420
00:33:54.927 [2024-05-13 20:47:10.637544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set
00:33:54.927 [2024-05-13 20:47:10.637782] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor
00:33:54.927 [2024-05-13 20:47:10.638002] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:54.927 [2024-05-13 20:47:10.638010] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:54.927 [2024-05-13 20:47:10.638018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:54.927 [2024-05-13 20:47:10.641535] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:54.927 [2024-05-13 20:47:10.650439] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.927 [2024-05-13 20:47:10.650943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.927 [2024-05-13 20:47:10.651331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.927 [2024-05-13 20:47:10.651343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.927 [2024-05-13 20:47:10.651351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.927 [2024-05-13 20:47:10.651569] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.927 [2024-05-13 20:47:10.651786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.927 [2024-05-13 20:47:10.651795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.927 [2024-05-13 20:47:10.651802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.927 [2024-05-13 20:47:10.655321] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.927 [2024-05-13 20:47:10.664225] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.927 [2024-05-13 20:47:10.664754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.927 [2024-05-13 20:47:10.665126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.927 [2024-05-13 20:47:10.665135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.927 [2024-05-13 20:47:10.665142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.927 [2024-05-13 20:47:10.665367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.927 [2024-05-13 20:47:10.665585] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.927 [2024-05-13 20:47:10.665593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.927 [2024-05-13 20:47:10.665600] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.927 [2024-05-13 20:47:10.669107] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.927 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.927 [2024-05-13 20:47:10.678017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.927 [2024-05-13 20:47:10.678409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.927 [2024-05-13 20:47:10.678777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.927 [2024-05-13 20:47:10.678787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.927 [2024-05-13 20:47:10.678795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.927 [2024-05-13 20:47:10.679012] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.927 [2024-05-13 20:47:10.679228] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.927 [2024-05-13 20:47:10.679235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.927 [2024-05-13 20:47:10.679242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.927 [2024-05-13 20:47:10.682759] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.927 [2024-05-13 20:47:10.691869] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.927 [2024-05-13 20:47:10.692539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.927 [2024-05-13 20:47:10.692842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.927 [2024-05-13 20:47:10.692855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.927 [2024-05-13 20:47:10.692865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.927 [2024-05-13 20:47:10.693102] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.927 [2024-05-13 20:47:10.693330] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.927 [2024-05-13 20:47:10.693339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.927 [2024-05-13 20:47:10.693346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.927 [2024-05-13 20:47:10.696859] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.927 [2024-05-13 20:47:10.704454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:54.927 [2024-05-13 20:47:10.705770] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.927 [2024-05-13 20:47:10.706347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.927 [2024-05-13 20:47:10.706706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.927 [2024-05-13 20:47:10.706716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.927 [2024-05-13 20:47:10.706724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.927 [2024-05-13 20:47:10.706942] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.927 [2024-05-13 20:47:10.707160] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.928 [2024-05-13 20:47:10.707168] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.928 [2024-05-13 20:47:10.707175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.928 [2024-05-13 20:47:10.710695] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.928 [2024-05-13 20:47:10.719614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.928 [2024-05-13 20:47:10.720144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.928 [2024-05-13 20:47:10.720590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.928 [2024-05-13 20:47:10.720601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.928 [2024-05-13 20:47:10.720610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.928 [2024-05-13 20:47:10.720827] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.928 [2024-05-13 20:47:10.721044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.928 [2024-05-13 20:47:10.721052] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.928 [2024-05-13 20:47:10.721059] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.928 [2024-05-13 20:47:10.724574] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.928 [2024-05-13 20:47:10.733479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:54.928 [2024-05-13 20:47:10.734061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.928 [2024-05-13 20:47:10.734459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.928 [2024-05-13 20:47:10.734470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420
00:33:54.928 [2024-05-13 20:47:10.734477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set
00:33:54.928 [2024-05-13 20:47:10.734697] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor
00:33:54.928 [2024-05-13 20:47:10.734914] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:54.928 [2024-05-13 20:47:10.734921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:54.928 [2024-05-13 20:47:10.734928] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:54.928 [2024-05-13 20:47:10.738439] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:54.928 [2024-05-13 20:47:10.747338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:54.928 [2024-05-13 20:47:10.747903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.928 [2024-05-13 20:47:10.748267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.928 [2024-05-13 20:47:10.748277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420
00:33:54.928 [2024-05-13 20:47:10.748284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set
00:33:54.928 [2024-05-13 20:47:10.748506] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor
00:33:54.928 [2024-05-13 20:47:10.748723] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:54.928 [2024-05-13 20:47:10.748731] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:54.928 [2024-05-13 20:47:10.748737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:54.928 [2024-05-13 20:47:10.752245] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:54.928 [2024-05-13 20:47:10.758759] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:54.928 [2024-05-13 20:47:10.758786] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:54.928 [2024-05-13 20:47:10.758795] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:54.928 [2024-05-13 20:47:10.758800] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:54.928 [2024-05-13 20:47:10.758804] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:54.928 [2024-05-13 20:47:10.758951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:33:54.928 [2024-05-13 20:47:10.759085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:33:54.928 [2024-05-13 20:47:10.759086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:54.928 [2024-05-13 20:47:10.761259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:54.928 [2024-05-13 20:47:10.761807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.928 [2024-05-13 20:47:10.762030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.928 [2024-05-13 20:47:10.762039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420
00:33:54.928 [2024-05-13 20:47:10.762047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set
00:33:54.928 [2024-05-13 20:47:10.762264] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor
00:33:54.928 [2024-05-13 20:47:10.762485] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:54.928 [2024-05-13 20:47:10.762494] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:54.928 [2024-05-13 20:47:10.762501] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:54.928 [2024-05-13 20:47:10.766005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:54.928 [2024-05-13 20:47:10.775114] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:54.928 [2024-05-13 20:47:10.775668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.928 [2024-05-13 20:47:10.776048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.928 [2024-05-13 20:47:10.776058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420
00:33:54.928 [2024-05-13 20:47:10.776066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set
00:33:54.928 [2024-05-13 20:47:10.776283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor
00:33:54.928 [2024-05-13 20:47:10.776504] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:54.928 [2024-05-13 20:47:10.776513] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:54.928 [2024-05-13 20:47:10.776520] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:54.928 [2024-05-13 20:47:10.780027] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:54.928 [2024-05-13 20:47:10.788930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.928 [2024-05-13 20:47:10.789596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.928 [2024-05-13 20:47:10.789877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.928 [2024-05-13 20:47:10.789892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.928 [2024-05-13 20:47:10.789902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.928 [2024-05-13 20:47:10.790148] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.928 [2024-05-13 20:47:10.790380] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.928 [2024-05-13 20:47:10.790390] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.928 [2024-05-13 20:47:10.790397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.928 [2024-05-13 20:47:10.793919] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.928 [2024-05-13 20:47:10.802822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.928 [2024-05-13 20:47:10.803639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.928 [2024-05-13 20:47:10.804007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.928 [2024-05-13 20:47:10.804020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.928 [2024-05-13 20:47:10.804030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.928 [2024-05-13 20:47:10.804269] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.928 [2024-05-13 20:47:10.804497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.928 [2024-05-13 20:47:10.804506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.928 [2024-05-13 20:47:10.804514] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.928 [2024-05-13 20:47:10.808022] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.928 [2024-05-13 20:47:10.816717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.928 [2024-05-13 20:47:10.817211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.928 [2024-05-13 20:47:10.817655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.928 [2024-05-13 20:47:10.817670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.928 [2024-05-13 20:47:10.817680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.928 [2024-05-13 20:47:10.817918] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.928 [2024-05-13 20:47:10.818138] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.928 [2024-05-13 20:47:10.818146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.928 [2024-05-13 20:47:10.818154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.928 [2024-05-13 20:47:10.821689] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.928 [2024-05-13 20:47:10.830590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.928 [2024-05-13 20:47:10.831140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.928 [2024-05-13 20:47:10.831562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.928 [2024-05-13 20:47:10.831573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.928 [2024-05-13 20:47:10.831580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.928 [2024-05-13 20:47:10.831797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.928 [2024-05-13 20:47:10.832013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.929 [2024-05-13 20:47:10.832025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.929 [2024-05-13 20:47:10.832032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.929 [2024-05-13 20:47:10.835809] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.929 [2024-05-13 20:47:10.844507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.929 [2024-05-13 20:47:10.845085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.929 [2024-05-13 20:47:10.845428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.929 [2024-05-13 20:47:10.845438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.929 [2024-05-13 20:47:10.845445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.929 [2024-05-13 20:47:10.845663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.929 [2024-05-13 20:47:10.845879] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.929 [2024-05-13 20:47:10.845887] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.929 [2024-05-13 20:47:10.845894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.929 [2024-05-13 20:47:10.849403] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:54.929 [2024-05-13 20:47:10.858295] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.929 [2024-05-13 20:47:10.858920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.929 [2024-05-13 20:47:10.859297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.929 [2024-05-13 20:47:10.859310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:54.929 [2024-05-13 20:47:10.859326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:54.929 [2024-05-13 20:47:10.859564] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:54.929 [2024-05-13 20:47:10.859784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.929 [2024-05-13 20:47:10.859793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.929 [2024-05-13 20:47:10.859801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.929 [2024-05-13 20:47:10.863318] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.192 [2024-05-13 20:47:10.872212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.192 [2024-05-13 20:47:10.872750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.192 [2024-05-13 20:47:10.873101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.192 [2024-05-13 20:47:10.873111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.192 [2024-05-13 20:47:10.873118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.192 [2024-05-13 20:47:10.873343] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.192 [2024-05-13 20:47:10.873561] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.192 [2024-05-13 20:47:10.873575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.192 [2024-05-13 20:47:10.873587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.192 [2024-05-13 20:47:10.877095] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.192 [2024-05-13 20:47:10.885989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.192 [2024-05-13 20:47:10.886417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.192 [2024-05-13 20:47:10.886627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.192 [2024-05-13 20:47:10.886636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.192 [2024-05-13 20:47:10.886644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.192 [2024-05-13 20:47:10.886860] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.192 [2024-05-13 20:47:10.887077] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.192 [2024-05-13 20:47:10.887084] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.192 [2024-05-13 20:47:10.887091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.192 [2024-05-13 20:47:10.890605] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.192 [2024-05-13 20:47:10.899912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.192 [2024-05-13 20:47:10.900636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.192 [2024-05-13 20:47:10.901054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.192 [2024-05-13 20:47:10.901066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.192 [2024-05-13 20:47:10.901076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.192 [2024-05-13 20:47:10.901321] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.192 [2024-05-13 20:47:10.901542] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.192 [2024-05-13 20:47:10.901550] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.192 [2024-05-13 20:47:10.901558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.192 [2024-05-13 20:47:10.905069] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.192 [2024-05-13 20:47:10.913760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.192 [2024-05-13 20:47:10.914449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.192 [2024-05-13 20:47:10.914884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.192 [2024-05-13 20:47:10.914896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.192 [2024-05-13 20:47:10.914906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.192 [2024-05-13 20:47:10.915142] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.192 [2024-05-13 20:47:10.915373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.192 [2024-05-13 20:47:10.915382] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.193 [2024-05-13 20:47:10.915390] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.193 [2024-05-13 20:47:10.918907] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.193 [2024-05-13 20:47:10.927611] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.193 [2024-05-13 20:47:10.928363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:10.928806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:10.928819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.193 [2024-05-13 20:47:10.928828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.193 [2024-05-13 20:47:10.929064] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.193 [2024-05-13 20:47:10.929284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.193 [2024-05-13 20:47:10.929293] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.193 [2024-05-13 20:47:10.929300] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.193 [2024-05-13 20:47:10.932822] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.193 [2024-05-13 20:47:10.941516] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.193 [2024-05-13 20:47:10.941828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:10.942183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:10.942193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.193 [2024-05-13 20:47:10.942202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.193 [2024-05-13 20:47:10.942429] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.193 [2024-05-13 20:47:10.942647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.193 [2024-05-13 20:47:10.942655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.193 [2024-05-13 20:47:10.942662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.193 [2024-05-13 20:47:10.946172] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.193 [2024-05-13 20:47:10.955275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.193 [2024-05-13 20:47:10.955999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:10.956323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:10.956337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.193 [2024-05-13 20:47:10.956346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.193 [2024-05-13 20:47:10.956583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.193 [2024-05-13 20:47:10.956802] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.193 [2024-05-13 20:47:10.956811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.193 [2024-05-13 20:47:10.956818] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.193 [2024-05-13 20:47:10.960333] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.193 [2024-05-13 20:47:10.969037] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.193 [2024-05-13 20:47:10.969725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:10.970099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:10.970111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.193 [2024-05-13 20:47:10.970120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.193 [2024-05-13 20:47:10.970365] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.193 [2024-05-13 20:47:10.970586] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.193 [2024-05-13 20:47:10.970594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.193 [2024-05-13 20:47:10.970601] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.193 [2024-05-13 20:47:10.974113] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.193 [2024-05-13 20:47:10.982810] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.193 [2024-05-13 20:47:10.983429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:10.983867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:10.983880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.193 [2024-05-13 20:47:10.983890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.193 [2024-05-13 20:47:10.984126] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.193 [2024-05-13 20:47:10.984354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.193 [2024-05-13 20:47:10.984363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.193 [2024-05-13 20:47:10.984371] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.193 [2024-05-13 20:47:10.987884] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.193 [2024-05-13 20:47:10.996577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.193 [2024-05-13 20:47:10.997260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:10.997688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:10.997701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.193 [2024-05-13 20:47:10.997711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.193 [2024-05-13 20:47:10.997948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.193 [2024-05-13 20:47:10.998168] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.193 [2024-05-13 20:47:10.998176] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.193 [2024-05-13 20:47:10.998183] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.193 [2024-05-13 20:47:11.001701] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.193 [2024-05-13 20:47:11.010395] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.193 [2024-05-13 20:47:11.011078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:11.011465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:11.011479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.193 [2024-05-13 20:47:11.011489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.193 [2024-05-13 20:47:11.011726] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.193 [2024-05-13 20:47:11.011947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.193 [2024-05-13 20:47:11.011956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.193 [2024-05-13 20:47:11.011963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.193 [2024-05-13 20:47:11.015484] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.193 [2024-05-13 20:47:11.024195] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.193 [2024-05-13 20:47:11.024601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:11.024978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:11.024988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.193 [2024-05-13 20:47:11.024995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.193 [2024-05-13 20:47:11.025213] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.193 [2024-05-13 20:47:11.025434] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.193 [2024-05-13 20:47:11.025444] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.193 [2024-05-13 20:47:11.025450] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.193 [2024-05-13 20:47:11.028961] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.193 [2024-05-13 20:47:11.038069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.193 [2024-05-13 20:47:11.038627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:11.038898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.193 [2024-05-13 20:47:11.038911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.193 [2024-05-13 20:47:11.038920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.193 [2024-05-13 20:47:11.039157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.193 [2024-05-13 20:47:11.039385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.194 [2024-05-13 20:47:11.039394] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.194 [2024-05-13 20:47:11.039401] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.194 [2024-05-13 20:47:11.042912] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.194 [2024-05-13 20:47:11.052024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.194 [2024-05-13 20:47:11.052641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.194 [2024-05-13 20:47:11.052865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.194 [2024-05-13 20:47:11.052879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.194 [2024-05-13 20:47:11.052887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.194 [2024-05-13 20:47:11.053105] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.194 [2024-05-13 20:47:11.053327] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.194 [2024-05-13 20:47:11.053336] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.194 [2024-05-13 20:47:11.053342] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.194 [2024-05-13 20:47:11.056854] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.194 [2024-05-13 20:47:11.065954] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.194 [2024-05-13 20:47:11.066646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.194 [2024-05-13 20:47:11.066896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.194 [2024-05-13 20:47:11.066909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.194 [2024-05-13 20:47:11.066918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.194 [2024-05-13 20:47:11.067155] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.194 [2024-05-13 20:47:11.067382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.194 [2024-05-13 20:47:11.067391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.194 [2024-05-13 20:47:11.067398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.194 [2024-05-13 20:47:11.070911] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.194 [2024-05-13 20:47:11.079809] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.194 [2024-05-13 20:47:11.080264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.194 [2024-05-13 20:47:11.080615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.194 [2024-05-13 20:47:11.080627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.194 [2024-05-13 20:47:11.080635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.194 [2024-05-13 20:47:11.080852] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.194 [2024-05-13 20:47:11.081068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.194 [2024-05-13 20:47:11.081076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.194 [2024-05-13 20:47:11.081083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.194 [2024-05-13 20:47:11.084595] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.194 [2024-05-13 20:47:11.093698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.194 [2024-05-13 20:47:11.094400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.194 [2024-05-13 20:47:11.094821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.194 [2024-05-13 20:47:11.094834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.194 [2024-05-13 20:47:11.094847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.194 [2024-05-13 20:47:11.095084] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.194 [2024-05-13 20:47:11.095304] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.194 [2024-05-13 20:47:11.095320] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.194 [2024-05-13 20:47:11.095328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.194 [2024-05-13 20:47:11.098845] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.194 [2024-05-13 20:47:11.107541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.194 [2024-05-13 20:47:11.108287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.194 [2024-05-13 20:47:11.108673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.194 [2024-05-13 20:47:11.108686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.194 [2024-05-13 20:47:11.108696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.194 [2024-05-13 20:47:11.108932] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.194 [2024-05-13 20:47:11.109152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.194 [2024-05-13 20:47:11.109160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.194 [2024-05-13 20:47:11.109167] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.194 [2024-05-13 20:47:11.112687] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.194 [2024-05-13 20:47:11.121389] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.194 [2024-05-13 20:47:11.121921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.194 [2024-05-13 20:47:11.122308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.194 [2024-05-13 20:47:11.122329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.194 [2024-05-13 20:47:11.122339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.194 [2024-05-13 20:47:11.122575] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.194 [2024-05-13 20:47:11.122796] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.194 [2024-05-13 20:47:11.122804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.194 [2024-05-13 20:47:11.122811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.194 [2024-05-13 20:47:11.126326] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.457 [2024-05-13 20:47:11.135222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.457 [2024-05-13 20:47:11.135907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.457 [2024-05-13 20:47:11.136150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.457 [2024-05-13 20:47:11.136162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.457 [2024-05-13 20:47:11.136172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.457 [2024-05-13 20:47:11.136420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.457 [2024-05-13 20:47:11.136641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.457 [2024-05-13 20:47:11.136649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.457 [2024-05-13 20:47:11.136656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.457 [2024-05-13 20:47:11.140168] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.457 [2024-05-13 20:47:11.149066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.457 [2024-05-13 20:47:11.149738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.150076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.150085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.458 [2024-05-13 20:47:11.150093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.458 [2024-05-13 20:47:11.150310] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.458 [2024-05-13 20:47:11.150532] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.458 [2024-05-13 20:47:11.150548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.458 [2024-05-13 20:47:11.150555] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.458 [2024-05-13 20:47:11.154068] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.458 [2024-05-13 20:47:11.162960] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.458 [2024-05-13 20:47:11.163658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.164037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.164049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.458 [2024-05-13 20:47:11.164058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.458 [2024-05-13 20:47:11.164295] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.458 [2024-05-13 20:47:11.164520] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.458 [2024-05-13 20:47:11.164529] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.458 [2024-05-13 20:47:11.164536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.458 [2024-05-13 20:47:11.168054] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.458 [2024-05-13 20:47:11.176760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.458 [2024-05-13 20:47:11.177434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.177820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.177832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.458 [2024-05-13 20:47:11.177841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.458 [2024-05-13 20:47:11.178078] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.458 [2024-05-13 20:47:11.178302] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.458 [2024-05-13 20:47:11.178310] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.458 [2024-05-13 20:47:11.178325] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.458 [2024-05-13 20:47:11.181838] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.458 [2024-05-13 20:47:11.190530] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.458 [2024-05-13 20:47:11.191103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.191477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.191488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.458 [2024-05-13 20:47:11.191495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.458 [2024-05-13 20:47:11.191712] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.458 [2024-05-13 20:47:11.191929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.458 [2024-05-13 20:47:11.191936] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.458 [2024-05-13 20:47:11.191943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.458 [2024-05-13 20:47:11.195459] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.458 [2024-05-13 20:47:11.204350] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.458 [2024-05-13 20:47:11.204896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.205145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.205159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.458 [2024-05-13 20:47:11.205168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.458 [2024-05-13 20:47:11.205412] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.458 [2024-05-13 20:47:11.205633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.458 [2024-05-13 20:47:11.205641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.458 [2024-05-13 20:47:11.205649] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.458 [2024-05-13 20:47:11.209158] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.458 [2024-05-13 20:47:11.218256] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.458 [2024-05-13 20:47:11.218800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.219195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.219207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.458 [2024-05-13 20:47:11.219216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.458 [2024-05-13 20:47:11.219468] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.458 [2024-05-13 20:47:11.219689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.458 [2024-05-13 20:47:11.219702] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.458 [2024-05-13 20:47:11.219709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.458 [2024-05-13 20:47:11.223223] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.458 [2024-05-13 20:47:11.232113] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.458 [2024-05-13 20:47:11.232816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.233192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.233205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.458 [2024-05-13 20:47:11.233214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.458 [2024-05-13 20:47:11.233456] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.458 [2024-05-13 20:47:11.233677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.458 [2024-05-13 20:47:11.233685] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.458 [2024-05-13 20:47:11.233693] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.458 [2024-05-13 20:47:11.237205] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.458 [2024-05-13 20:47:11.245894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.458 [2024-05-13 20:47:11.246618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.246992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.247005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.458 [2024-05-13 20:47:11.247014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.458 [2024-05-13 20:47:11.247250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.458 [2024-05-13 20:47:11.247479] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.458 [2024-05-13 20:47:11.247488] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.458 [2024-05-13 20:47:11.247496] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.458 [2024-05-13 20:47:11.251005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.458 [2024-05-13 20:47:11.259693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.458 [2024-05-13 20:47:11.260393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.260839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.260853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.458 [2024-05-13 20:47:11.260862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.458 [2024-05-13 20:47:11.261098] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.458 [2024-05-13 20:47:11.261326] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.458 [2024-05-13 20:47:11.261336] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.458 [2024-05-13 20:47:11.261347] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.458 [2024-05-13 20:47:11.264863] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.458 [2024-05-13 20:47:11.273556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.458 [2024-05-13 20:47:11.274228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.458 [2024-05-13 20:47:11.274557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.274572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.459 [2024-05-13 20:47:11.274581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.459 [2024-05-13 20:47:11.274818] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.459 [2024-05-13 20:47:11.275038] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.459 [2024-05-13 20:47:11.275047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.459 [2024-05-13 20:47:11.275054] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.459 [2024-05-13 20:47:11.278573] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.459 [2024-05-13 20:47:11.287469] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.459 [2024-05-13 20:47:11.288044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.288392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.288403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.459 [2024-05-13 20:47:11.288410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.459 [2024-05-13 20:47:11.288627] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.459 [2024-05-13 20:47:11.288844] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.459 [2024-05-13 20:47:11.288852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.459 [2024-05-13 20:47:11.288859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.459 [2024-05-13 20:47:11.292368] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.459 [2024-05-13 20:47:11.301256] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.459 [2024-05-13 20:47:11.301932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.302370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.302384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.459 [2024-05-13 20:47:11.302393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.459 [2024-05-13 20:47:11.302630] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.459 [2024-05-13 20:47:11.302850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.459 [2024-05-13 20:47:11.302858] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.459 [2024-05-13 20:47:11.302866] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.459 [2024-05-13 20:47:11.306390] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.459 [2024-05-13 20:47:11.315082] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.459 [2024-05-13 20:47:11.315764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.316150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.316162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.459 [2024-05-13 20:47:11.316172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.459 [2024-05-13 20:47:11.316416] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.459 [2024-05-13 20:47:11.316637] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.459 [2024-05-13 20:47:11.316645] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.459 [2024-05-13 20:47:11.316652] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.459 [2024-05-13 20:47:11.320178] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.459 [2024-05-13 20:47:11.328875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.459 [2024-05-13 20:47:11.329570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.329942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.329954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.459 [2024-05-13 20:47:11.329964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.459 [2024-05-13 20:47:11.330201] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.459 [2024-05-13 20:47:11.330427] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.459 [2024-05-13 20:47:11.330437] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.459 [2024-05-13 20:47:11.330444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.459 [2024-05-13 20:47:11.333959] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.459 [2024-05-13 20:47:11.342658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.459 [2024-05-13 20:47:11.343296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.343764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.343801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.459 [2024-05-13 20:47:11.343812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.459 [2024-05-13 20:47:11.344049] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.459 [2024-05-13 20:47:11.344269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.459 [2024-05-13 20:47:11.344278] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.459 [2024-05-13 20:47:11.344286] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.459 [2024-05-13 20:47:11.347809] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.459 [2024-05-13 20:47:11.356507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.459 [2024-05-13 20:47:11.357138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.357603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.357640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.459 [2024-05-13 20:47:11.357651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.459 [2024-05-13 20:47:11.357888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.459 [2024-05-13 20:47:11.358108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.459 [2024-05-13 20:47:11.358117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.459 [2024-05-13 20:47:11.358125] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.459 [2024-05-13 20:47:11.361645] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.459 [2024-05-13 20:47:11.370342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.459 [2024-05-13 20:47:11.370920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.371120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.371130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.459 [2024-05-13 20:47:11.371137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.459 [2024-05-13 20:47:11.371359] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.459 [2024-05-13 20:47:11.371578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.459 [2024-05-13 20:47:11.371586] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.459 [2024-05-13 20:47:11.371594] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.459 [2024-05-13 20:47:11.375102] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.459 [2024-05-13 20:47:11.384201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.459 [2024-05-13 20:47:11.384765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.385139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.385152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.459 [2024-05-13 20:47:11.385161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.459 [2024-05-13 20:47:11.385404] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.459 [2024-05-13 20:47:11.385625] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.459 [2024-05-13 20:47:11.385634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.459 [2024-05-13 20:47:11.385641] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.459 [2024-05-13 20:47:11.389159] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.459 [2024-05-13 20:47:11.398072] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.459 [2024-05-13 20:47:11.398766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.399000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.459 [2024-05-13 20:47:11.399014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.459 [2024-05-13 20:47:11.399024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.459 [2024-05-13 20:47:11.399261] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.459 [2024-05-13 20:47:11.399489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.460 [2024-05-13 20:47:11.399498] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.460 [2024-05-13 20:47:11.399505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.722 [2024-05-13 20:47:11.403016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.722 [2024-05-13 20:47:11.411912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.722 [2024-05-13 20:47:11.412613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:55.722 [2024-05-13 20:47:11.412987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.722 [2024-05-13 20:47:11.413001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.722 [2024-05-13 20:47:11.413010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:55.722 [2024-05-13 20:47:11.413247] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.722 [2024-05-13 20:47:11.413475] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.722 [2024-05-13 20:47:11.413485] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.722 [2024-05-13 20:47:11.413492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:55.722 [2024-05-13 20:47:11.417007] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.722 [2024-05-13 20:47:11.425712] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.722 [2024-05-13 20:47:11.426297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.722 [2024-05-13 20:47:11.426651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.722 [2024-05-13 20:47:11.426661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.722 [2024-05-13 20:47:11.426668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.722 [2024-05-13 20:47:11.426886] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.722 [2024-05-13 20:47:11.427102] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.722 [2024-05-13 20:47:11.427110] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.722 [2024-05-13 20:47:11.427116] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.722 [2024-05-13 20:47:11.430629] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.722 [2024-05-13 20:47:11.439524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.722 [2024-05-13 20:47:11.440068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.722 [2024-05-13 20:47:11.440444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.722 [2024-05-13 20:47:11.440458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.722 [2024-05-13 20:47:11.440468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.722 [2024-05-13 20:47:11.440706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.722 [2024-05-13 20:47:11.440925] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.722 [2024-05-13 20:47:11.440934] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.722 [2024-05-13 20:47:11.440941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.722 [2024-05-13 20:47:11.444462] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:55.722 [2024-05-13 20:47:11.453373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.722 [2024-05-13 20:47:11.454051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.722 [2024-05-13 20:47:11.454697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.722 [2024-05-13 20:47:11.454734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.722 [2024-05-13 20:47:11.454745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.722 [2024-05-13 20:47:11.454982] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.722 [2024-05-13 20:47:11.455202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.722 [2024-05-13 20:47:11.455211] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.722 [2024-05-13 20:47:11.455218] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.722 [2024-05-13 20:47:11.458327] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:55.722 [2024-05-13 20:47:11.458737] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:55.722 [2024-05-13 20:47:11.467215] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.722 [2024-05-13 20:47:11.467904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.722 [2024-05-13 20:47:11.468136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.722 [2024-05-13 20:47:11.468151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.722 [2024-05-13 20:47:11.468165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.722 [2024-05-13 20:47:11.468410] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.722 [2024-05-13 20:47:11.468632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.722 [2024-05-13 20:47:11.468641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.722 [2024-05-13 20:47:11.468648] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.722 [2024-05-13 20:47:11.472161] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.722 [2024-05-13 20:47:11.481020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.722 [2024-05-13 20:47:11.481470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.722 [2024-05-13 20:47:11.481839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.722 [2024-05-13 20:47:11.481848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.722 [2024-05-13 20:47:11.481856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.722 [2024-05-13 20:47:11.482073] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.722 [2024-05-13 20:47:11.482289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.722 [2024-05-13 20:47:11.482297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.722 [2024-05-13 20:47:11.482304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.722 [2024-05-13 20:47:11.485817] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.722 Malloc0 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:55.722 [2024-05-13 20:47:11.494915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.722 [2024-05-13 20:47:11.495502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.722 [2024-05-13 20:47:11.495849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.722 [2024-05-13 20:47:11.495859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.722 [2024-05-13 20:47:11.495866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.722 [2024-05-13 20:47:11.496083] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.722 [2024-05-13 20:47:11.496299] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.722 [2024-05-13 20:47:11.496308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.722 [2024-05-13 20:47:11.496321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.722 [2024-05-13 20:47:11.499829] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.722 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:55.722 [2024-05-13 20:47:11.508716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.723 [2024-05-13 20:47:11.509416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.723 [2024-05-13 20:47:11.509757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.723 [2024-05-13 20:47:11.509771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c73080 with addr=10.0.0.2, port=4420 00:33:55.723 [2024-05-13 20:47:11.509780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73080 is same with the state(5) to be set 00:33:55.723 [2024-05-13 20:47:11.510017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73080 (9): Bad file descriptor 00:33:55.723 [2024-05-13 20:47:11.510238] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.723 [2024-05-13 20:47:11.510246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.723 [2024-05-13 20:47:11.510253] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.723 [2024-05-13 20:47:11.513773] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.723 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.723 20:47:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:55.723 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.723 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:55.723 [2024-05-13 20:47:11.521938] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:55.723 [2024-05-13 20:47:11.522173] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:55.723 [2024-05-13 20:47:11.522470] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.723 20:47:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.723 20:47:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3294541 00:33:55.723 [2024-05-13 20:47:11.598154] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
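For reference, the target-side setup that host/bdevperf.sh traces in the entries above (@17 through @21) reduces to the RPC sequence below. This is a minimal sketch, assuming a running nvmf_tgt and the autotest rpc_cmd wrapper; the commands and arguments are copied from the xtrace output above, not a standalone script of record.

    # Create the TCP transport and a 64 MB malloc bdev with 512-byte blocks
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    # Expose Malloc0 through subsystem cnode1, listening on 10.0.0.2:4420
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420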
00:34:05.725 00:34:05.725 Latency(us) 00:34:05.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:05.725 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:05.725 Verification LBA range: start 0x0 length 0x4000 00:34:05.725 Nvme1n1 : 15.01 7205.13 28.15 9629.77 0.00 7577.54 788.48 16274.77 00:34:05.725 =================================================================================================================== 00:34:05.725 Total : 7205.13 28.15 9629.77 0.00 7577.54 788.48 16274.77 00:34:05.725 20:47:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:05.726 rmmod nvme_tcp 00:34:05.726 rmmod nvme_fabrics 00:34:05.726 rmmod nvme_keyring 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3295601 ']' 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3295601 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 3295601 ']' 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 3295601 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3295601 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3295601' 00:34:05.726 killing process with pid 3295601 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 3295601 00:34:05.726 [2024-05-13 20:47:20.210569] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@970 -- # wait 3295601 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:05.726 20:47:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.762 20:47:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:06.762 00:34:06.762 real 0m28.411s 00:34:06.762 user 1m2.941s 00:34:06.762 sys 0m7.432s 00:34:06.762 20:47:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:06.762 20:47:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:06.762 ************************************ 00:34:06.762 END TEST nvmf_bdevperf 00:34:06.762 ************************************ 00:34:06.762 20:47:22 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:06.762 20:47:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:06.762 20:47:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:06.762 20:47:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:06.762 ************************************ 00:34:06.762 START TEST nvmf_target_disconnect 00:34:06.762 ************************************ 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:06.762 * Looking for test storage... 
00:34:06.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.762 20:47:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestinit 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:06.763 20:47:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:14.911 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:14.912 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:14.912 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.912 20:47:30 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:14.912 Found net devices under 0000:31:00.0: cvl_0_0 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:14.912 Found net devices under 0000:31:00.1: cvl_0_1 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:14.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:14.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:34:14.912 00:34:14.912 --- 10.0.0.2 ping statistics --- 00:34:14.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.912 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:14.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:14.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:34:14.912 00:34:14.912 --- 10.0.0.1 ping statistics --- 00:34:14.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.912 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:14.912 ************************************ 00:34:14.912 START TEST nvmf_target_disconnect_tc1 00:34:14.912 ************************************ 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # set +e 00:34:14.912 20:47:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:15.175 EAL: No 
free 2048 kB hugepages reported on node 1 00:34:15.175 [2024-05-13 20:47:30.930915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.175 [2024-05-13 20:47:30.931329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.175 [2024-05-13 20:47:30.931345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d9520 with addr=10.0.0.2, port=4420 00:34:15.175 [2024-05-13 20:47:30.931375] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:15.175 [2024-05-13 20:47:30.931392] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:15.175 [2024-05-13 20:47:30.931400] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:15.175 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:15.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:15.175 Initializing NVMe Controllers 00:34:15.175 20:47:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # trap - ERR 00:34:15.175 20:47:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # print_backtrace 00:34:15.175 20:47:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:34:15.175 20:47:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # return 0 00:34:15.175 20:47:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:34:15.175 20:47:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@41 -- # set -e 00:34:15.175 00:34:15.175 real 0m0.107s 00:34:15.175 user 0m0.049s 00:34:15.175 sys 0m0.057s 00:34:15.175 20:47:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:15.175 20:47:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:15.175 ************************************ 00:34:15.175 END TEST nvmf_target_disconnect_tc1 00:34:15.175 ************************************ 00:34:15.175 20:47:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:15.175 20:47:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:15.175 20:47:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:15.175 20:47:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:15.175 ************************************ 00:34:15.175 START TEST nvmf_target_disconnect_tc2 00:34:15.175 ************************************ 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3302265 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3302265 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3302265 ']' 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:15.175 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:15.175 [2024-05-13 20:47:31.093058] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:34:15.175 [2024-05-13 20:47:31.093112] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.436 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.436 [2024-05-13 20:47:31.185482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:15.436 [2024-05-13 20:47:31.279504] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:15.436 [2024-05-13 20:47:31.279565] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:15.436 [2024-05-13 20:47:31.279574] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:15.436 [2024-05-13 20:47:31.279581] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:15.436 [2024-05-13 20:47:31.279587] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:15.436 [2024-05-13 20:47:31.279754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:15.436 [2024-05-13 20:47:31.279912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:15.436 [2024-05-13 20:47:31.280074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:15.436 [2024-05-13 20:47:31.280075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:16.011 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:16.011 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:16.011 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:16.011 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.011 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.011 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.011 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:16.011 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.011 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.011 Malloc0 00:34:16.011 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.011 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:16.011 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.011 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.273 [2024-05-13 20:47:31.958425] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.273 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.273 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:16.273 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.273 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.273 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.273 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:16.273 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.273 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.273 20:47:31 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.273 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:16.273 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.273 20:47:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.273 [2024-05-13 20:47:31.998500] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:16.273 [2024-05-13 20:47:31.998835] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.273 20:47:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.273 20:47:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:16.273 20:47:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.273 20:47:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.273 20:47:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.273 20:47:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # reconnectpid=3302297 00:34:16.273 20:47:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@52 -- # sleep 2 00:34:16.273 20:47:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:16.273 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.192 20:47:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@53 -- # kill -9 3302265 00:34:18.192 20:47:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@55 -- # sleep 2 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Write completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Write completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Write completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Write completed with error (sct=0, sc=8) 00:34:18.192 
starting I/O failed 00:34:18.192 Write completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Write completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Write completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Write completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Write completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Write completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Write completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Write completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Write completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Write completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 Read completed with error (sct=0, sc=8) 00:34:18.192 starting I/O failed 00:34:18.192 [2024-05-13 20:47:34.032266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.192 [2024-05-13 20:47:34.032804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.192 [2024-05-13 20:47:34.033094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.192 [2024-05-13 20:47:34.033107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.192 qpair failed and we were unable to recover it. 00:34:18.192 [2024-05-13 20:47:34.033541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.192 [2024-05-13 20:47:34.033969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.192 [2024-05-13 20:47:34.033982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.192 qpair failed and we were unable to recover it. 00:34:18.192 [2024-05-13 20:47:34.034285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.192 [2024-05-13 20:47:34.034568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.192 [2024-05-13 20:47:34.034603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.192 qpair failed and we were unable to recover it. 
00:34:18.192 [2024-05-13 20:47:34.034809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.192 [2024-05-13 20:47:34.035157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.192 [2024-05-13 20:47:34.035168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.192 qpair failed and we were unable to recover it. 00:34:18.192 [2024-05-13 20:47:34.035549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.192 [2024-05-13 20:47:34.035963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.192 [2024-05-13 20:47:34.035976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.192 qpair failed and we were unable to recover it. 00:34:18.192 [2024-05-13 20:47:34.036307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.036571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.036582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.036784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.037077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.037087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.037425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.037759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.037768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.037979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.038273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.038283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.038624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.038991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.039001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 
00:34:18.193 [2024-05-13 20:47:34.039159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.039602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.039613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.039878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.040136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.040146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.040405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.040780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.040790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.041160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.041518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.041527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.041828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.042187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.042197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.042622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.042840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.042850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.043112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.043428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.043438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 
00:34:18.193 [2024-05-13 20:47:34.043829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.044214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.044223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.044529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.044870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.044880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.045209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.045557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.045567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.045715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.046059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.046068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.046308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.046691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.046700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.047028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.047402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.047412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.047777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.048132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.048140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 
00:34:18.193 [2024-05-13 20:47:34.048519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.048886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.048895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.049245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.049574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.049583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.049917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.050200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.050209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.050396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.051453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.051475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.051809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.052169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.052179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.052588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.052903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.052912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.053280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.053644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.053654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 
00:34:18.193 [2024-05-13 20:47:34.054020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.054406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.054417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.054895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.055233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.055241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.193 [2024-05-13 20:47:34.055677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.055924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.193 [2024-05-13 20:47:34.055933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.193 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.056276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.056520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.056529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.056847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.057184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.057193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.057413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.057742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.057751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.058079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.058432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.058441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 
00:34:18.194 [2024-05-13 20:47:34.058837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.059208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.059217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.059617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.060045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.060053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.060356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.060614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.060623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.060957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.061205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.061213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.061561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.061892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.061904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.062237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.062562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.062572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.062820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.063185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.063194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 
00:34:18.194 [2024-05-13 20:47:34.063549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.063878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.063887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.064013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.064324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.064334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.064561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.064922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.064930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.065331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.065710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.065719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.066069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.066429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.066438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.066670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.066942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.066951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.067264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.067647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.067658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 
00:34:18.194 [2024-05-13 20:47:34.067995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.068284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.068296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.068674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.068915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.068924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.069210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.069622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.069631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.069995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.070298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.070306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.070736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.071046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.071054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.071390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.071743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.071753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.071989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.072326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.072336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 
00:34:18.194 [2024-05-13 20:47:34.072625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.072954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.194 [2024-05-13 20:47:34.072962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.194 qpair failed and we were unable to recover it. 00:34:18.194 [2024-05-13 20:47:34.073357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.073703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.073712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.073935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.074177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.074185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.074508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.074866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.074877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.075213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.075556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.075566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.075912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.076251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.076259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.076513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.076865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.076873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 
00:34:18.195 [2024-05-13 20:47:34.077098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.077359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.077368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.077748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.078163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.078171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.078558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.078896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.078904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.079248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.079571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.079580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.079922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.080260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.080268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.080691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.081033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.081043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.081414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.081751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.081761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 
00:34:18.195 [2024-05-13 20:47:34.082131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.082520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.082529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.082902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.083224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.083233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.083569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.083932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.083941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.084372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.084713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.084722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.085057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.085426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.085435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.085790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.086081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.086090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.086295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.086659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.086668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 
00:34:18.195 [2024-05-13 20:47:34.086979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.087309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.087322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.087507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.087871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.087880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.088210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.088607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.088616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.088945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.089274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.089284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.089526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.089791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.089800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.090152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.090553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.090562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.090825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.091176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.091185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 
00:34:18.195 [2024-05-13 20:47:34.091477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.091732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.091741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.092075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.092428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.092437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.195 qpair failed and we were unable to recover it. 00:34:18.195 [2024-05-13 20:47:34.092782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.195 [2024-05-13 20:47:34.093054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.093064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.093437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.093823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.093832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.094036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.094425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.094435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.094793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.095027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.095037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.095392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.095584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.095594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 
00:34:18.196 [2024-05-13 20:47:34.095987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.096325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.096334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.096625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.096990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.097000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.097331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.097684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.097694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.098065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.098435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.098445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.098795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.099190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.099199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.099539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.099908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.099916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.100234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.100546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.100555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 
00:34:18.196 [2024-05-13 20:47:34.100745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.100953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.100963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.101298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.101656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.101667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.101991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.102362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.102372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.102787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.103124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.103133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.103468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.103705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.103714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.104074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.104430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.104440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.104799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.105130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.105139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 
00:34:18.196 [2024-05-13 20:47:34.105462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.105814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.105823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.106155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.106489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.106499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.106856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.107187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.107195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.107550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.107926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.107934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.108176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.108480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.108489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.108867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.109197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.109206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.109568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.109930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.109939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 
00:34:18.196 [2024-05-13 20:47:34.110165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.110522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.110531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.110865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.111226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.196 [2024-05-13 20:47:34.111235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.196 qpair failed and we were unable to recover it. 00:34:18.196 [2024-05-13 20:47:34.111493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.111732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.111742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.112114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.112341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.112351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.112700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.113060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.113069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.113417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.113621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.113630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.113946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.114286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.114296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 
00:34:18.197 [2024-05-13 20:47:34.114533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.114910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.114919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.115288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.115527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.115537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.115909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.116283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.116292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.116639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.116979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.116988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.117346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.117691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.117700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.118026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.118398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.118407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.118753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.119132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.119141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 
00:34:18.197 [2024-05-13 20:47:34.119556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.119894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.119903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.120274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.120594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.120604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.120899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.121234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.121243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.121654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.121994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.122002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.122189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.122566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.122576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.122819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.123157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.123166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.123497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.123832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.123841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 
00:34:18.197 [2024-05-13 20:47:34.124243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.124475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.124485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.124871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.125190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.125199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.125562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.125908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.125917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.197 [2024-05-13 20:47:34.126286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.126619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.197 [2024-05-13 20:47:34.126629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.197 qpair failed and we were unable to recover it. 00:34:18.198 [2024-05-13 20:47:34.126826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.127137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.127147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.198 qpair failed and we were unable to recover it. 00:34:18.198 [2024-05-13 20:47:34.127415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.127750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.127760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.198 qpair failed and we were unable to recover it. 00:34:18.198 [2024-05-13 20:47:34.128036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.128396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.128416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.198 qpair failed and we were unable to recover it. 
00:34:18.198 [2024-05-13 20:47:34.128572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.128899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.128908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.198 qpair failed and we were unable to recover it. 00:34:18.198 [2024-05-13 20:47:34.129244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.129603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.129612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.198 qpair failed and we were unable to recover it. 00:34:18.198 [2024-05-13 20:47:34.130387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.130685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.130695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.198 qpair failed and we were unable to recover it. 00:34:18.198 [2024-05-13 20:47:34.130946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.131295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.131304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.198 qpair failed and we were unable to recover it. 00:34:18.198 [2024-05-13 20:47:34.131636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.131995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.198 [2024-05-13 20:47:34.132003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.198 qpair failed and we were unable to recover it. 00:34:18.198 [2024-05-13 20:47:34.132349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.466 [2024-05-13 20:47:34.132689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.132700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.133004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.133360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.133383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 
00:34:18.467 [2024-05-13 20:47:34.133657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.133996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.134007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.134430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.134805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.134814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.135162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.135507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.135516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.135743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.136067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.136076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.136456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.136793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.136802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.137152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.137487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.137496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.137905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.137994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.138004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 
00:34:18.467 [2024-05-13 20:47:34.138354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.138703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.138711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.139047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.139343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.139356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.139692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.140033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.140042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.140409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.140755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.140764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.140954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.141348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.141357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.141721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.142018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.142026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.142372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.142743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.142752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 
00:34:18.467 [2024-05-13 20:47:34.143087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.143280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.143289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.143685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.144053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.144062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.144400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.144734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.144743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.144976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.145345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.145354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.145730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.146108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.146117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.146471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.146813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.146821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.147010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.147238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.147247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 
00:34:18.467 [2024-05-13 20:47:34.147567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.147934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.147943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.148271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.148620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.148630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.148844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.149188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.149197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.149584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.149850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.149859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.150181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.150533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.150543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.150791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.151132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.467 [2024-05-13 20:47:34.151141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.467 qpair failed and we were unable to recover it. 00:34:18.467 [2024-05-13 20:47:34.151387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.151788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.151797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 
00:34:18.468 [2024-05-13 20:47:34.152164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.152496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.152506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.152863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.153195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.153204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.153545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.153922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.153931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.154255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.154502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.154511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.154847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.155181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.155190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.155550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.155922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.155931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.156329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.156678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.156687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 
00:34:18.468 [2024-05-13 20:47:34.157035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.157420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.157429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.157809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.158179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.158188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.158525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.158882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.158891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.159236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.159546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.159555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.159881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.160242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.160251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.160597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.160970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.160979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.161312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.161596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.161605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 
00:34:18.468 [2024-05-13 20:47:34.161967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.162303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.162317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.162647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.162976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.162987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.163326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.163668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.163678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.164046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.164378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.164387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.164742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.165099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.165108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.165440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.165819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.165828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.166153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.166490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.166500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 
00:34:18.468 [2024-05-13 20:47:34.166872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.167248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.167256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.167656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.167947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.167956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.168327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.168675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.168684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.168928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.169159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.169168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.169520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.169862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.169873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.170274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.170586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.170595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 00:34:18.468 [2024-05-13 20:47:34.170943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.171322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.468 [2024-05-13 20:47:34.171332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.468 qpair failed and we were unable to recover it. 
00:34:18.469 [2024-05-13 20:47:34.171730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.172055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.172064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.172434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.172789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.172798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.173142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.173453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.173463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.173838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.174174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.174183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.174527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.174881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.174890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.175073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.175436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.175446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.175804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.176178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.176186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 
00:34:18.469 [2024-05-13 20:47:34.176572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.176926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.176937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.177322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.177673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.177682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.178051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.178528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.178562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.178897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.179253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.179263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.179723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.180078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.180087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.180332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.180683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.180693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.181022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.181521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.181556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 
00:34:18.469 [2024-05-13 20:47:34.181933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.182290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.182299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.182661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.183045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.183054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.183300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.183654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.183664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.184043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.184273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.184285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.184617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.184871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.184880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.185250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.185567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.185576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.186005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.186245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.186255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 
00:34:18.469 [2024-05-13 20:47:34.187090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.187429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.187440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.187782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.188144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.188153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.188490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.188818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.188827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.189155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.189500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.189511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.189892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.190222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.190231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.190624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.190996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.191006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.191344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.191737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.191747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 
00:34:18.469 [2024-05-13 20:47:34.192112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.192407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.192417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.469 qpair failed and we were unable to recover it. 00:34:18.469 [2024-05-13 20:47:34.192786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.469 [2024-05-13 20:47:34.193166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.193174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.193496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.193837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.193846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.194214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.194539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.194548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.194882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.195222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.195232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.195633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.196013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.196023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.196354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.196607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.196616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 
00:34:18.470 [2024-05-13 20:47:34.196993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.197240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.197250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.197620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.197981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.197990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.198329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.198700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.198709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.199107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.199366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.199375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.199579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.199897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.199906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.200222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.200570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.200579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.200904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.201246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.201255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 
00:34:18.470 [2024-05-13 20:47:34.201586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.201918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.201927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.202131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.202369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.202379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.202597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.202838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.202855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.203205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.203618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.203627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.203916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.204240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.204249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.204574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.204900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.204910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.205232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.205396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.205406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 
00:34:18.470 [2024-05-13 20:47:34.205751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.205911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.205921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.206326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.206667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.206677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.206908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.207221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.207230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.207618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.207934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.207943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.208356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.208597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.208606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.208957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.209321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.209330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 00:34:18.470 [2024-05-13 20:47:34.209727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.210029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.470 [2024-05-13 20:47:34.210037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.470 qpair failed and we were unable to recover it. 
00:34:18.470 [2024-05-13 20:47:34.210373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.470 [2024-05-13 20:47:34.210738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.470 [2024-05-13 20:47:34.210747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420
00:34:18.470 qpair failed and we were unable to recover it.
[... the same pattern (connect() failed, errno = 111 from posix.c:1037:posix_sock_create, then nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from [2024-05-13 20:47:34.210373] through [2024-05-13 20:47:34.312940] (console time 00:34:18.470 to 00:34:18.476); every connection attempt to 10.0.0.2:4420 in this span was refused (errno 111, ECONNREFUSED) and no qpair could be recovered ...]
00:34:18.476 [2024-05-13 20:47:34.313147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.313513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.313523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.476 qpair failed and we were unable to recover it. 00:34:18.476 [2024-05-13 20:47:34.313875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.314248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.314257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.476 qpair failed and we were unable to recover it. 00:34:18.476 [2024-05-13 20:47:34.314513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.314861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.314870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.476 qpair failed and we were unable to recover it. 00:34:18.476 [2024-05-13 20:47:34.315208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.315444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.315453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.476 qpair failed and we were unable to recover it. 00:34:18.476 [2024-05-13 20:47:34.315826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.316159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.316167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.476 qpair failed and we were unable to recover it. 00:34:18.476 [2024-05-13 20:47:34.316358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.316732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.316741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.476 qpair failed and we were unable to recover it. 00:34:18.476 [2024-05-13 20:47:34.317114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.317405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.317414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.476 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-05-13 20:47:34.317762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.318095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.318104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.476 qpair failed and we were unable to recover it. 00:34:18.476 [2024-05-13 20:47:34.318351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.318695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.318705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.476 qpair failed and we were unable to recover it. 00:34:18.476 [2024-05-13 20:47:34.319105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.319434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.319444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.476 qpair failed and we were unable to recover it. 00:34:18.476 [2024-05-13 20:47:34.319780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.320012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.320021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.476 qpair failed and we were unable to recover it. 00:34:18.476 [2024-05-13 20:47:34.320373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.320696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.320705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.476 qpair failed and we were unable to recover it. 00:34:18.476 [2024-05-13 20:47:34.320996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.321334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.321343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.476 qpair failed and we were unable to recover it. 00:34:18.476 [2024-05-13 20:47:34.321734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.322100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.322110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.476 qpair failed and we were unable to recover it. 
00:34:18.476 [2024-05-13 20:47:34.322466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.476 [2024-05-13 20:47:34.322813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.322822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.323162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.323462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.323471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.323834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.324095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.324104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.324306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.324606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.324615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.324953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.325149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.325159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.325563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.325926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.325935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.326159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.326446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.326456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 
00:34:18.477 [2024-05-13 20:47:34.326809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.327171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.327180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.327522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.327905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.327916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.328261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.328638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.328648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.328992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.329366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.329375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.329770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.329991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.330000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.330225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.330581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.330590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.330908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.331168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.331177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 
00:34:18.477 [2024-05-13 20:47:34.331411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.331746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.331755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.332102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.332458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.332468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.332807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.333061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.333070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.333429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.333788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.333797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.334046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.334310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.334324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.334655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.335028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.335036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.335441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.335673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.335682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 
00:34:18.477 [2024-05-13 20:47:34.335941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.336349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.336367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.336735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.336985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.336994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.337249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.337605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.337615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.337833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.338196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.338205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.338575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.338921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.338930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.339328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.339663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.339672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.340010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.340213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.340223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 
00:34:18.477 [2024-05-13 20:47:34.340470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.340840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.340848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.477 qpair failed and we were unable to recover it. 00:34:18.477 [2024-05-13 20:47:34.341039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.341413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.477 [2024-05-13 20:47:34.341423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.341853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.342189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.342197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.342375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.342832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.342842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.343184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.343457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.343466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.343751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.344126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.344135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.344567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.344895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.344903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 
00:34:18.478 [2024-05-13 20:47:34.345279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.345497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.345507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.345846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.346192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.346202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.346481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.346827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.346836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.347147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.347344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.347355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.347693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.347889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.347899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.348247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.348641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.348650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.348979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.349345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.349355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 
00:34:18.478 [2024-05-13 20:47:34.349584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.349928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.349937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.350291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.350663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.350672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.351002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.351388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.351397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.351748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.352067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.352076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.352280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.352672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.352682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.353030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.353351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.353361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.353742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.354084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.354093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 
00:34:18.478 [2024-05-13 20:47:34.354423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.354797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.354807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.355060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.355416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.355426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.355623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.355943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.355952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.356415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.356741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.356751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.357128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.357503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.357512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.357853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.358225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.358234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.358351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.358691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.358702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 
00:34:18.478 [2024-05-13 20:47:34.358884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.359126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.359135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.359397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.359618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.359627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.360010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.360348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.360357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.478 qpair failed and we were unable to recover it. 00:34:18.478 [2024-05-13 20:47:34.360703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.478 [2024-05-13 20:47:34.360947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.360955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.361341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.361708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.361717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.362047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.362398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.362408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.362767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.363102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.363110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 
00:34:18.479 [2024-05-13 20:47:34.363348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.363638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.363647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.364060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.364410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.364420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.364755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.365102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.365111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.365340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.365693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.365702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.366030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.366417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.366426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.366802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.367159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.367167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.367466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.367817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.367826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 
00:34:18.479 [2024-05-13 20:47:34.368155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.368452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.368461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.368810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.369143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.369152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.369355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.369667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.369676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.370006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.370350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.370360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.370753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.371106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.371114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.371442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.371796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.371805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.372135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.372492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.372501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 
00:34:18.479 [2024-05-13 20:47:34.372895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.373233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.373242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.373534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.373895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.373903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.374238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.374649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.374659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.375108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.375451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.375461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.375817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.376193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.376202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.376553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.376850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.376859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 00:34:18.479 [2024-05-13 20:47:34.377190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.378171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.479 [2024-05-13 20:47:34.378193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.479 qpair failed and we were unable to recover it. 
00:34:18.479 [2024-05-13 20:47:34.378537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.479 [2024-05-13 20:47:34.378908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.479 [2024-05-13 20:47:34.378917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420
00:34:18.479 qpair failed and we were unable to recover it.
[... the same failure pattern repeats for every retry in this interval (timestamps 2024-05-13 20:47:34.379 through 20:47:34.481): two posix_sock_create "connect() failed, errno = 111" errors, followed by an nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420" error and "qpair failed and we were unable to recover it." Only the first and last occurrences are shown here ...]
00:34:18.754 [2024-05-13 20:47:34.481498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.754 [2024-05-13 20:47:34.481803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.754 [2024-05-13 20:47:34.481812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420
00:34:18.754 qpair failed and we were unable to recover it.
00:34:18.754 [2024-05-13 20:47:34.482141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-13 20:47:34.482514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-13 20:47:34.482523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-13 20:47:34.482863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-13 20:47:34.483202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-13 20:47:34.483212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-13 20:47:34.483563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-13 20:47:34.483657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-13 20:47:34.483666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-13 20:47:34.484043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-13 20:47:34.484326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.754 [2024-05-13 20:47:34.484335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.754 qpair failed and we were unable to recover it. 00:34:18.754 [2024-05-13 20:47:34.484623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.485006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.485015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.485348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.485709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.485717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.486046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.486246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.486255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 
00:34:18.755 [2024-05-13 20:47:34.486600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.486944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.486953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.487303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.487688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.487700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.487927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.488326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.488335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.488693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.488958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.488966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.489383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.489711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.489720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.490091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.490365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.490375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.490737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.491072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.491080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 
00:34:18.755 [2024-05-13 20:47:34.491264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.491505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.491514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.491856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.492231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.492239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.492294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.492445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.492455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.492782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.493119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.493127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.493491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.493837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.493850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.494146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.494596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.494606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.494990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.495347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.495357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 
00:34:18.755 [2024-05-13 20:47:34.495660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.496002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.496011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.496345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.496699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.496707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.497038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.497409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.497419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.497831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.498143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.498152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.498439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.498640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.498651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.499009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.499260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.499270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 00:34:18.755 [2024-05-13 20:47:34.499538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.499881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.755 [2024-05-13 20:47:34.499890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.755 qpair failed and we were unable to recover it. 
00:34:18.756 [2024-05-13 20:47:34.500201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.500487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.500500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.500855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.501166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.501174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.501584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.501960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.501969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.502264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.502601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.502610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.502816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.503153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.503161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.503466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.503786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.503795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.504131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.504462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.504472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 
00:34:18.756 [2024-05-13 20:47:34.504717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.504903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.504912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.505305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.505492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.505502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.505819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.506051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.506059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.506517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.506891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.506900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.507266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.507668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.507678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.508039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.508404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.508413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.508769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.509123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.509131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 
00:34:18.756 [2024-05-13 20:47:34.509376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.509833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.509841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.510201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.510414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.510423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.510659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.511000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.511008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.511423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.511752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.511761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.512117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.512465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.512474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.512837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.513166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.513175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.513463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.513771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.513780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 
00:34:18.756 [2024-05-13 20:47:34.514143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.514468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.514477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.514852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.515227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.515236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.515715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.516058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.516067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.516303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.516523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.516533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.516892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.517197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.517205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.517398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.517781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.517790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.517993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.518330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.518340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 
00:34:18.756 [2024-05-13 20:47:34.518685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.519016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.519025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.756 qpair failed and we were unable to recover it. 00:34:18.756 [2024-05-13 20:47:34.519381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.756 [2024-05-13 20:47:34.519719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.519728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.520074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.520396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.520406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.520654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.520885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.520894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.521200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.521401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.521411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.521664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.521974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.521991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.522346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.522688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.522697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 
00:34:18.757 [2024-05-13 20:47:34.523024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.523360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.523369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.523736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.524086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.524095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.524515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.524839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.524848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.525151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.525506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.525515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.525849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.526038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.526048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.526398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.526778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.526786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.527054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.527439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.527448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 
00:34:18.757 [2024-05-13 20:47:34.527858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.528120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.528129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.528379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.528690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.528700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.529066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.529407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.529416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.529770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.530145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.530154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.530422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.530764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.530772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.531021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.531368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.531377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.531679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.532040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.532049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 
00:34:18.757 [2024-05-13 20:47:34.532376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.532686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.532695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.533052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.533394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.533403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.533793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.534146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.534156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.534493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.534841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.534850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.535037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.535426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.535435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.535647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.535959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.535968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.536209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.536533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.536542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 
00:34:18.757 [2024-05-13 20:47:34.536783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.537120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.537129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.537374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.537587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.537596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.537937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.538300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.757 [2024-05-13 20:47:34.538309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.757 qpair failed and we were unable to recover it. 00:34:18.757 [2024-05-13 20:47:34.538718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.539060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.539069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.539446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.539827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.539836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.540174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.540543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.540553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.540870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.541236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.541245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 
00:34:18.758 [2024-05-13 20:47:34.541463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.541674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.541684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.542038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.542375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.542384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.542739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.543105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.543114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.543492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.543721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.543730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.544090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.544343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.544352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.544698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.545035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.545043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.545399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.545756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.545765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 
00:34:18.758 [2024-05-13 20:47:34.546169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.546486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.546495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.546841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.547183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.547192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.547533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.547881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.547889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.548063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.548267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.548275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.548610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.548918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.548927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.549260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.549599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.549609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.549825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.550181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.550190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 
00:34:18.758 [2024-05-13 20:47:34.550546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.550918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.550927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.551217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.551571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.551580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.551930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.552017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.552027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.552412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.552755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.552764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.553114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.553498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.553508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.553876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.554079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.554088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.554414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.554679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.554687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 
00:34:18.758 [2024-05-13 20:47:34.555017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.555369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.555379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.555637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.555972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.555980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.556224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.556572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.556581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.556778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.557146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.557155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.758 [2024-05-13 20:47:34.557356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.557558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.758 [2024-05-13 20:47:34.557568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.758 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.557812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.558150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.558159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.558507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.558842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.558851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 
00:34:18.759 [2024-05-13 20:47:34.559200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.559554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.559565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.559939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.560327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.560337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.560728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.560906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.560915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.561156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.561427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.561436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.561640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.561996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.562005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.562229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.562470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.562479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.562810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.563049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.563057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 
00:34:18.759 [2024-05-13 20:47:34.563283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.563523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.563532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.563914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.564302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.564311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.564639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.564989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.564998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.565339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.565586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.565595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.565848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.566213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.566222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.566571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.566909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.566917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.567248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.567627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.567636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 
00:34:18.759 [2024-05-13 20:47:34.568003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.568273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.568283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.568680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.568894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.568903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.569156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.569448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.569457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.569679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.570065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.570073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.570325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.570566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.570575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.570813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.571104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.571112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.571473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.571851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.571860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 
00:34:18.759 [2024-05-13 20:47:34.572231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.572578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.572588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.759 qpair failed and we were unable to recover it. 00:34:18.759 [2024-05-13 20:47:34.572915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.573283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.759 [2024-05-13 20:47:34.573292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.573632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.573962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.573970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.574298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.574599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.574608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.574945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.575274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.575282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.575653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.576037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.576046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.576417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.576770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.576779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 
00:34:18.760 [2024-05-13 20:47:34.577115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.577440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.577449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.577808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.577923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.577931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.578201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.578566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.578575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.578951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.579324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.579333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.579658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.580003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.580011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.580156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.580502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.580511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.580580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.580922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.580930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 
00:34:18.760 [2024-05-13 20:47:34.581264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.581595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.581604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.581835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.582071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.582080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.582423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.582777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.582786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.583091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.583455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.583464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.583796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.584143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.584152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.584481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.584724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.584734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.585077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.585450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.585460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 
00:34:18.760 [2024-05-13 20:47:34.585817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.586149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.586158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.586490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.586716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.586726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.587085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.587451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.587461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.587662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.587978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.587987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.588347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.588713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.588722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.589073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.589339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.589349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.589702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.590038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.590046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 
00:34:18.760 [2024-05-13 20:47:34.590338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.590538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.590547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.590853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.591249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.591262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.591489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.591815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.591825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.760 qpair failed and we were unable to recover it. 00:34:18.760 [2024-05-13 20:47:34.592097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.760 [2024-05-13 20:47:34.592468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.592478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.592801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.593145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.593154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.593489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.593796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.593804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.594123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.594511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.594520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 
00:34:18.761 [2024-05-13 20:47:34.594769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.595069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.595078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.595404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.595728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.595737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.596070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.596422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.596431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.596777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.597112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.597121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.597477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.597697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.597709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.598050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.598385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.598395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.598728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.599102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.599111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 
00:34:18.761 [2024-05-13 20:47:34.599453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.599808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.599817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.600060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.600407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.600416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.600776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.601125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.601134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.601470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.601845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.601853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.602186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.602437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.602446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.602807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.603049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.603058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.603429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.603794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.603803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 
00:34:18.761 [2024-05-13 20:47:34.604153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.604491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.604503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.604876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.605275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.605283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.605577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.605970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.605980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.606130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.606468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.606478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.606723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.607059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.607068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.607448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.607787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.607797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.607992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.608350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.608360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 
00:34:18.761 [2024-05-13 20:47:34.608702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.609034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.609042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.609371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.609685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.609694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.610066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.610410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.610420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.610778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.611114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.611122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.761 [2024-05-13 20:47:34.611504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.611842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.761 [2024-05-13 20:47:34.611851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.761 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.612200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.612564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.612573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.612904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.613262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.613271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 
00:34:18.762 [2024-05-13 20:47:34.613436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.613798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.613807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.614141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.614480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.614489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.614669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.615002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.615012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.615376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.615725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.615734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.616083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.616464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.616473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.616885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.617223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.617232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.617577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.617908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.617917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 
00:34:18.762 [2024-05-13 20:47:34.618289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.618486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.618496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.618842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.619174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.619182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.619606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.619809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.619820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.620186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.620530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.620539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.620871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.621101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.621110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.621452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.621830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.621839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.622117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.622442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.622451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 
00:34:18.762 [2024-05-13 20:47:34.622796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.623165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.623175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.623546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.623878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.623887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.624293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.624578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.624587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.624943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.625236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.625244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.625611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.625960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.625969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.626300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.626378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.626387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.626729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.627080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.627088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 
00:34:18.762 [2024-05-13 20:47:34.627422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.627802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.627810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.628137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.628452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.628461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.628840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.629176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.629185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.629609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.629939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.629948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.630319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.630521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.630530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.630857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.631210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.631219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.762 qpair failed and we were unable to recover it. 00:34:18.762 [2024-05-13 20:47:34.631475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.631694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.762 [2024-05-13 20:47:34.631704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 
00:34:18.763 [2024-05-13 20:47:34.632105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.632433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.632442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.632791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.632993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.633002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.633345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.633689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.633698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.634027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.634358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.634367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.634764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.635097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.635106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.635437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.635813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.635822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.636198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.636571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.636580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 
00:34:18.763 [2024-05-13 20:47:34.636915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.637277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.637286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.637646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.638006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.638015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.638385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.638760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.638769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.639111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.639439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.639448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.639829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.640195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.640203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.640568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.640915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.640924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.641294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.641638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.641647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 
00:34:18.763 [2024-05-13 20:47:34.641856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.642208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.642216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.642612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.642972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.642981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.643360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.643681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.643690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.644022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.644308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.644322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.644576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.644908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.644917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.645324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.645605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.645615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.646006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.646344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.646353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 
00:34:18.763 [2024-05-13 20:47:34.646598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.646933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.646942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.647271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.647516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.647525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.647795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.648150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.648159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.648521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.648782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.648791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.649127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.649490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.649500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.649848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.650058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.650068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.650404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.650741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.650750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 
00:34:18.763 [2024-05-13 20:47:34.651097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.651435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.651444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.763 qpair failed and we were unable to recover it. 00:34:18.763 [2024-05-13 20:47:34.651865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.763 [2024-05-13 20:47:34.652154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.652163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.652519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.652891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.652899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.653226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.653555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.653564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.653918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.654255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.654263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.654601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.654940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.654948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.655271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.655478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.655488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 
00:34:18.764 [2024-05-13 20:47:34.655832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.656058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.656067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.656393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.656726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.656734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.657065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.657249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.657258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.657626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.658008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.658017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.658386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.658739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.658748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.659110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.659369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.659379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.659699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.659845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.659855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 
00:34:18.764 [2024-05-13 20:47:34.660233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.660564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.660573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.660901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.661278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.661286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.661690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.661977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.661987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.662353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.662583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.662592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.662954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.663290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.663299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.663699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.664055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.664064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.664279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.664594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.664604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 
00:34:18.764 [2024-05-13 20:47:34.664933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.665269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.665279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.665636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.665969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.665979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.666350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.666544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.666554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.666839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.667176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.667185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.667551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.667888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.667897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.764 qpair failed and we were unable to recover it. 00:34:18.764 [2024-05-13 20:47:34.668106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.764 [2024-05-13 20:47:34.668465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.668475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.668816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.669004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.669014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 
00:34:18.765 [2024-05-13 20:47:34.669343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.669666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.669676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.670044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.670287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.670296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.670626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.670913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.670922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.671128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.671521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.671531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.671834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.672211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.672220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.672563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.672901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.672910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.673256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.673524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.673533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 
00:34:18.765 [2024-05-13 20:47:34.673865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.674081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.674091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.674462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.674858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.674866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.675093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.675327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.675337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.675692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.676027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.676035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.676401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.676766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.676774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.676950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.677279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.677289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.677656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.678045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.678054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 
00:34:18.765 [2024-05-13 20:47:34.678267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.678633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.678643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.679044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.679284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.679293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.679640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.679852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.679862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.680189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.680459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.680469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.680815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.681199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.681208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.681563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.681900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.681909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.682253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.682617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.682626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 
00:34:18.765 [2024-05-13 20:47:34.682972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.683324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.683333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.683689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.684065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.684074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.684406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.684784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.684793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:18.765 [2024-05-13 20:47:34.685117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.685460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.765 [2024-05-13 20:47:34.685469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:18.765 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.685823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.686075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.686085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.686424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.686795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.686804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.687253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.687542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.687551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 
00:34:19.036 [2024-05-13 20:47:34.687901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.688246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.688255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.688599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.688951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.688960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.689306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.689630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.689639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.690008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.690356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.690365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.690694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.691062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.691071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.691401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.691744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.691758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.692087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.692415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.692425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 
00:34:19.036 [2024-05-13 20:47:34.692881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.693209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.693219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.693584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.693795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.693804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.694132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.694482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.694491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.694819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.695132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.695141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.695372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.695671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.695680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.696043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.696260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.696269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.696595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.696936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.696945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 
00:34:19.036 [2024-05-13 20:47:34.697288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.697644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.697654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.697902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.698244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.698255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.698696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.699028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.699037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.699410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.699772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.699781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.700024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.700328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.700338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.700676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.701011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.701020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.701362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.701590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.701599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 
00:34:19.036 [2024-05-13 20:47:34.701965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.702295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.702303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.702604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.702966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.702974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.703341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.703654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.703662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.036 [2024-05-13 20:47:34.703869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.704075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.036 [2024-05-13 20:47:34.704083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.036 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.704405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.704738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.704748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.705102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.705392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.705402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.705665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.706004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.706014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 
00:34:19.037 [2024-05-13 20:47:34.706359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.706702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.706711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.707080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.707464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.707473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.707710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.708094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.708103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.708439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.708784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.708793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.709139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.709360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.709370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.709684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.710061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.710070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.710421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.710830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.710840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 
00:34:19.037 [2024-05-13 20:47:34.711099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.711367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.711377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.711787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.712144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.712154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.712526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.712771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.712780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.713120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.713505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.713515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.713883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.714262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.714272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.714605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.714860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.714870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.715223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.715570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.715579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 
00:34:19.037 [2024-05-13 20:47:34.715948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.716279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.716288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.716617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.716967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.716975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.717304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.717670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.717679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.717868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.718241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.718250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.718605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.718959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.718968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.719303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.719650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.719659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.720019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.720230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.720239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 
00:34:19.037 [2024-05-13 20:47:34.720582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.720876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.720885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.721233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.721573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.721582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.721996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.722231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.722239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.722542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.722775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.722784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.723154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.723496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.723505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.037 qpair failed and we were unable to recover it. 00:34:19.037 [2024-05-13 20:47:34.723858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.037 [2024-05-13 20:47:34.724204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.724212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.724605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.724961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.724971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 
00:34:19.038 [2024-05-13 20:47:34.725232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.725455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.725465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.725767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.726106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.726115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.726464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.726828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.726836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.727095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.727456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.727465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.727805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.728176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.728184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.728522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.728859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.728868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.729267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.729614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.729623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 
00:34:19.038 [2024-05-13 20:47:34.729968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.730338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.730347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.730669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.730931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.730940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.731265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.731634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.731643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.731996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.732379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.732389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.732752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.733071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.733080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.733273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.733604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.733613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.733980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.734321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.734331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 
00:34:19.038 [2024-05-13 20:47:34.734675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.734905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.734913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.735250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.735554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.735563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.735934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.736280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.736289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.736615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.736985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.736994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.737324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.737540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.737548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.737971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.738305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.738316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.738695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.739050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.739058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 
00:34:19.038 [2024-05-13 20:47:34.739405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.739624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.739634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.739971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.740340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.740349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.740785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.741119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.741128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.741289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.741649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.741658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.741984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.742324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.742334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.742611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.742994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.743002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 00:34:19.038 [2024-05-13 20:47:34.743245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.743555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.038 [2024-05-13 20:47:34.743564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.038 qpair failed and we were unable to recover it. 
00:34:19.038 [2024-05-13 20:47:34.743888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.744221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.744229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.744566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.744893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.744902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.745245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.745589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.745598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.745926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.746116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.746126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.746485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.746845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.746854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.747188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.747377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.747387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.747815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.748148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.748156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 
00:34:19.039 [2024-05-13 20:47:34.748500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.748851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.748860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.749062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.749418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.749428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.749516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.749858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.749867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.750216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.750561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.750570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.750915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.751224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.751233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.751643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.751925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.751934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.752272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.752691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.752700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 
00:34:19.039 [2024-05-13 20:47:34.753022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.753399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.753408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.753743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.754079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.754087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.754431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.754643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.754652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.755020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.755359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.755369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.755678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.755935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.755944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.756286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.756553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.756562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.756917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.757269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.757278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 
00:34:19.039 [2024-05-13 20:47:34.757643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.758001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.758009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.758226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.758549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.758558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.758849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.759193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.759203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.759577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.759920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.759929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.760293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.760644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.760654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.761005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.761372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.761381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.761712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.762070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.762079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 
00:34:19.039 [2024-05-13 20:47:34.762410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.762752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.762761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.039 qpair failed and we were unable to recover it. 00:34:19.039 [2024-05-13 20:47:34.763080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.763397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.039 [2024-05-13 20:47:34.763406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.763776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.763990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.763999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.764200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.764540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.764551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.764903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.765118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.765128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.765498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.765830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.765839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.766193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.766539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.766548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 
00:34:19.040 [2024-05-13 20:47:34.766868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.767205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.767214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.767446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.767793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.767802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.768123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.768464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.768473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.768799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.769086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.769094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.769421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.769790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.769798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.770121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.770445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.770454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.770807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.771162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.771171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 
00:34:19.040 [2024-05-13 20:47:34.771496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.771852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.771861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.772174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.772545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.772554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.772883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.773144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.773153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.773482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.773798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.773807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.774165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.774445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.774453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.774782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.775093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.775102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.775426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.775734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.775742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 
00:34:19.040 [2024-05-13 20:47:34.776072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.776418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.776428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.776634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.776887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.776896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.777226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.777571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.777580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.040 qpair failed and we were unable to recover it. 00:34:19.040 [2024-05-13 20:47:34.777882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.040 [2024-05-13 20:47:34.778258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.778267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.778678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.778861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.778871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.779200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.779475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.779485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.779850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.780182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.780191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 
00:34:19.041 [2024-05-13 20:47:34.780569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.780941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.780951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.781256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.781508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.781517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.781850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.782182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.782191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.782531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.782843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.782852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.783199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.783553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.783562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.783799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.784044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.784054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.784406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.784742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.784751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 
00:34:19.041 [2024-05-13 20:47:34.785092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.785460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.785470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.785861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.786072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.786081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.786456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.786829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.786837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.787168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.787547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.787556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.787888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.788239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.788248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.788584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.788946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.788955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.789176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.789526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.789535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 
00:34:19.041 [2024-05-13 20:47:34.789882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.790244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.790254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.790600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.790968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.790977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.791141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.791404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.791417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.791809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.792167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.792175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.792357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.792671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.792680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.793009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.793390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.793400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 00:34:19.041 [2024-05-13 20:47:34.793740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.793959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.041 [2024-05-13 20:47:34.793968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.041 qpair failed and we were unable to recover it. 
00:34:19.041 [2024-05-13 20:47:34.794302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.041 [2024-05-13 20:47:34.794649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.041 [2024-05-13 20:47:34.794659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420
00:34:19.041 qpair failed and we were unable to recover it.
[... identical four-line groups (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error on tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeat continuously for timestamps 20:47:34.794 through 20:47:34.896 ...]
00:34:19.047 [2024-05-13 20:47:34.896645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.047 [2024-05-13 20:47:34.896842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.047 [2024-05-13 20:47:34.896852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420
00:34:19.047 qpair failed and we were unable to recover it.
00:34:19.047 [2024-05-13 20:47:34.897212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.897543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.897552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.897910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.898261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.898270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.898488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.898848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.898859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.899182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.899557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.899566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.899885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.900200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.900208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.900569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.900881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.900889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.901284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.901618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.901627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 
00:34:19.047 [2024-05-13 20:47:34.901969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.902235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.902243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.902676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.903024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.903033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.903386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.903726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.903734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.904059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.904272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.904281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.904605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.904976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.904984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.905352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.905691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.905702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.906056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.906397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.906406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 
00:34:19.047 [2024-05-13 20:47:34.906737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.907075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.907084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.907408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.907757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.907766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.908101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.908456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.908466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.908792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.909123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.909132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.909471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.909810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.909819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.910144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.910500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.910509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.910843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.911197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.911206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 
00:34:19.047 [2024-05-13 20:47:34.911522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.911863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.911872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.912244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.912582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.912593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.912914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.913297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.913306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.047 qpair failed and we were unable to recover it. 00:34:19.047 [2024-05-13 20:47:34.913529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.047 [2024-05-13 20:47:34.913842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.913851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.914220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.914563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.914572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.914968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.915292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.915300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.915602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.915965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.915974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 
00:34:19.048 [2024-05-13 20:47:34.916303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.916659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.916668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.916944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.917282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.917291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.917619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.917988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.917997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.918484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.918855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.918864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.919204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.919538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.919550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.919971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.920297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.920306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.920633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.920979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.920989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 
00:34:19.048 [2024-05-13 20:47:34.921340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.921694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.921703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.922035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.922399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.922408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.922624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.922890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.922899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.923184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.923538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.923547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.923846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.924206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.924214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.924611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.924808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.924818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.925181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.925527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.925536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 
00:34:19.048 [2024-05-13 20:47:34.925897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.926116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.926125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.926454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.926790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.926800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.927159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.927465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.927475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.927824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.928163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.928172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.928421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.928792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.928801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.929123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.929349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.929359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.929699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.929950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.929958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 
00:34:19.048 [2024-05-13 20:47:34.930289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.930659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.930668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.930999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.931234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.931243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.931612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.931949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.931957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.932284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.932625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.932634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.048 [2024-05-13 20:47:34.932960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.933332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.048 [2024-05-13 20:47:34.933342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.048 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.933679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.934025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.934034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.934361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.934710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.934719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 
00:34:19.049 [2024-05-13 20:47:34.935045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.935379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.935389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.935690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.936023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.936032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.936229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.936606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.936615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.936828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.937174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.937182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.937544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.937878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.937886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.938240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.938551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.938560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.938931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.939265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.939274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 
00:34:19.049 [2024-05-13 20:47:34.939628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.939965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.939974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.940344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.940691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.940700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.941042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.941429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.941439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.941814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.942171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.942181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.942526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.942908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.942917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.943271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.943361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.943371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.943727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.944074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.944083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 
00:34:19.049 [2024-05-13 20:47:34.944410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.944744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.944754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.945108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.945442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.945452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.945790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.946097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.946106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.946465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.946814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.946824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.947228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.947530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.947539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.947889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.948099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.948110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.948436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.948789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.948799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 
00:34:19.049 [2024-05-13 20:47:34.949141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.949437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.949447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.949758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.950128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.049 [2024-05-13 20:47:34.950138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.049 qpair failed and we were unable to recover it. 00:34:19.049 [2024-05-13 20:47:34.950478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.950830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.950840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.951087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.951398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.951408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.951760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.952108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.952118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.952366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.952666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.952675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.953024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.953363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.953373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 
00:34:19.050 [2024-05-13 20:47:34.953547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.953907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.953917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.954264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.954600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.954610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.954953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.955335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.955345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.955598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.955907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.955916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.956120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.956419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.956429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.956679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.956967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.956975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.957320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.957661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.957671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 
00:34:19.050 [2024-05-13 20:47:34.957983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.958351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.958360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.958709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.959083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.959091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.959422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.959777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.959785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.960119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.960532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.960541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.960902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.961134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.961142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.961334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.961656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.961665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.961777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.962150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.962159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 
00:34:19.050 [2024-05-13 20:47:34.962548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.962906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.962915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.963278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.963615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.963624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.963833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.964143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.964152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.964381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.964725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.964734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.964967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.965342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.965351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.965700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.966061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.966069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.966408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.966761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.966770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 
00:34:19.050 [2024-05-13 20:47:34.967094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.967360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.967369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.967772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.968058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.968066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.968401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.968795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.968804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.050 qpair failed and we were unable to recover it. 00:34:19.050 [2024-05-13 20:47:34.969144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.969461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.050 [2024-05-13 20:47:34.969470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.051 qpair failed and we were unable to recover it. 00:34:19.051 [2024-05-13 20:47:34.969822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.051 [2024-05-13 20:47:34.970196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.051 [2024-05-13 20:47:34.970205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.051 qpair failed and we were unable to recover it. 00:34:19.051 [2024-05-13 20:47:34.970445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.970811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.970821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.971166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.971490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.971500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 
00:34:19.321 [2024-05-13 20:47:34.971844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.972175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.972184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.972472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.972847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.972857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.973234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.973575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.973584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.973949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.974328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.974338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.974677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.975008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.975017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.975348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.975734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.975743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.976091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.976256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.976266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 
00:34:19.321 [2024-05-13 20:47:34.976669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.977003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.977012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.977388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.977778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.977787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.978040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.978375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.978385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.978721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.979010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.979019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.979367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.979564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.979574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.979916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.980269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.980277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.980501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.980766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.980776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 
00:34:19.321 [2024-05-13 20:47:34.981111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.981424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.981433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.981753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.982066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.982075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.982405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.982776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.982784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.983103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.983370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.983379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.321 qpair failed and we were unable to recover it. 00:34:19.321 [2024-05-13 20:47:34.983706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.321 [2024-05-13 20:47:34.984072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.984081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.984420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.984772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.984780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.985118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.985491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.985501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 
00:34:19.322 [2024-05-13 20:47:34.985715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.986056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.986066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.986400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.986653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.986662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.987010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.987346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.987355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.987687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.988050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.988058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.988407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.988757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.988766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.989128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.989437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.989446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.989775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.990142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.990151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 
00:34:19.322 [2024-05-13 20:47:34.990555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.990928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.990937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.991263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.991619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.991628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.991954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.992325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.992334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.992700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.993073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.993082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.993411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.993756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.993764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.994084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.994429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.994439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.994777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.995125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.995133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 
00:34:19.322 [2024-05-13 20:47:34.995457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.995844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.995852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.996198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.996549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.996558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.996912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.997265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.997273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.997640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.997992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.998002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.998350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.998686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.998694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.999060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.999448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:34.999457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:34.999736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:35.000069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:35.000080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 
00:34:19.322 [2024-05-13 20:47:35.000413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:35.000756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:35.000764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:35.001086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:35.001457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:35.001466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:35.001798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:35.002084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:35.002093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:35.002420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:35.002757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:35.002765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:35.003106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:35.003479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:35.003488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.322 qpair failed and we were unable to recover it. 00:34:19.322 [2024-05-13 20:47:35.003815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:35.004193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.322 [2024-05-13 20:47:35.004202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.004608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.004946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.004955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 
00:34:19.323 [2024-05-13 20:47:35.005304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.005638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.005648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.006012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.006370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.006380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.006730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.007061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.007072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.007384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.007776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.007785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.008164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.008475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.008484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.008810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.009142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.009150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.009477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.009799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.009807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 
00:34:19.323 [2024-05-13 20:47:35.010033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.010189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.010199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.010541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.010741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.010750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.011101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.011414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.011423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.011853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.012189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.012197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.012635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.012974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.012983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.013362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.013714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.013725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.014050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.014421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.014431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 
00:34:19.323 [2024-05-13 20:47:35.014785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.015158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.015166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.015398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.015731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.015739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.016087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.016434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.016443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.016641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.016999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.017008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.017363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.017539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.017549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.017890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.018241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.018250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.018597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.018962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.018970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 
00:34:19.323 [2024-05-13 20:47:35.019331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.019661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.019670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.019995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.020329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.020341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.020668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.021034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.021044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.021257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.021582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.021592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.021793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.022095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.022104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.022415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.022790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.022799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 00:34:19.323 [2024-05-13 20:47:35.023130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.023498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.323 [2024-05-13 20:47:35.023508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.323 qpair failed and we were unable to recover it. 
00:34:19.323 [2024-05-13 20:47:35.023902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.024238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.024246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.024645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.025030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.025040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.025384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.025740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.025749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.026119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.026459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.026469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.026787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.027122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.027130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.027459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.027819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.027828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.028054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.028419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.028428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 
00:34:19.324 [2024-05-13 20:47:35.028758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.029124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.029133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.029480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.029817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.029826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.030165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.030505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.030514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.030847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.031057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.031065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.031407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.031774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.031783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.032126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.032487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.032496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.032828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.033208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.033216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 
00:34:19.324 [2024-05-13 20:47:35.033410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.033720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.033729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.034082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.034459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.034469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.034702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.035041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.035051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.035434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.035818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.035827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.036184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.036536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.036546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.036933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.037264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.037273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.037582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.037967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.037977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 
00:34:19.324 [2024-05-13 20:47:35.038310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.038659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.038669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.038884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.039188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.039197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.039602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.039941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.039950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.040197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.040529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.040539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.040828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.041185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.041195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.041582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.041896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.041905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.042232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.042632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.042641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 
00:34:19.324 [2024-05-13 20:47:35.042964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.043298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.043307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.324 [2024-05-13 20:47:35.043677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.044024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.324 [2024-05-13 20:47:35.044033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.324 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.044383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.044765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.044774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.045011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.045338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.045347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.045697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.046080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.046088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.046414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.046792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.046801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.047127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.047471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.047480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 
00:34:19.325 [2024-05-13 20:47:35.047803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.048168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.048176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.048500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.048810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.048819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.049163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.049543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.049552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.049884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.050217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.050227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.050471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.050716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.050724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.050786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.051109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.051119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.051489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.051836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.051845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 
00:34:19.325 [2024-05-13 20:47:35.052212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.052582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.052591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.052928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.053282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.053291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.053608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.054008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.054017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.054255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.054639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.054648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.054975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.055327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.055337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.055721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.056085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.056093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.056384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.056711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.056720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 
00:34:19.325 [2024-05-13 20:47:35.056941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.057285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.057294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.057620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.057962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.057970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.058303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.058486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.058496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.058830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.059162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.059171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.325 [2024-05-13 20:47:35.059512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.059844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.325 [2024-05-13 20:47:35.059853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.325 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.060199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.060429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.060438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.060776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.061084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.061093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 
00:34:19.326 [2024-05-13 20:47:35.061416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.061756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.061765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.062141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.062492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.062501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.062850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.063207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.063216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.063534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.063906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.063915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.064240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.064609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.064619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.064948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.065268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.065277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.065628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.065965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.065974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 
00:34:19.326 [2024-05-13 20:47:35.066299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.066661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.066670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.067009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.067372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.067381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.067593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.067897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.067906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.068309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.068638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.068647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.069015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.069288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.069297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.069638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.069972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.069981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.070352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.070620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.070629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 
00:34:19.326 [2024-05-13 20:47:35.070957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.071333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.071343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.071684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.072008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.072017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.072367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.072701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.072709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.073036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.073399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.073408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.073733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.074071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.074079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.074407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.074792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.074800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.075131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.075476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.075486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 
00:34:19.326 [2024-05-13 20:47:35.075842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.076139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.076147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.076514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.076880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.076889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.077098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.077322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.077332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.077563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.077769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.077779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.078103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.078438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.078447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.078875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.079209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.079218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.326 qpair failed and we were unable to recover it. 00:34:19.326 [2024-05-13 20:47:35.079574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.326 [2024-05-13 20:47:35.079862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.079871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 
00:34:19.327 [2024-05-13 20:47:35.080225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.080570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.080579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.080989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.081319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.081328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.081681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.082024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.082033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.082235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.082539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.082549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.082878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.083249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.083258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.083600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.083964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.083973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.084323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.084657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.084666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 
00:34:19.327 [2024-05-13 20:47:35.084998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.085260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.085269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.085612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.085949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.085958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.086285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.086651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.086660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.087004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.087381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.087391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.087712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.088085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.088094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.088299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.088639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.088649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.088972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.089348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.089357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 
00:34:19.327 [2024-05-13 20:47:35.089681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.089949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.089958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.090362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.090695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.090704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.091045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.091342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.091351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.091688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.092060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.092069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.092396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.092749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.092758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.093085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.093421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.093430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.093770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.094115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.094124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 
00:34:19.327 [2024-05-13 20:47:35.094450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.094810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.094819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.095099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.095455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.095465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.095794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.096059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.096067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.096391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.096748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.096757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.096945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.097379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.097388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.097713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.098083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.098093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.098429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.098858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.098866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 
00:34:19.327 [2024-05-13 20:47:35.099157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.099485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.327 [2024-05-13 20:47:35.099494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.327 qpair failed and we were unable to recover it. 00:34:19.327 [2024-05-13 20:47:35.099829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.100206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.100215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.100653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.100998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.101008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.101322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.101532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.101542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.101848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.102176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.102185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.102511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.102884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.102892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.103222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.103479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.103488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 
00:34:19.328 [2024-05-13 20:47:35.103866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.104200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.104208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.104554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.104897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.104906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.105270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.105637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.105646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.105985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.106353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.106362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.106690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.107058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.107067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.107439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.107825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.107835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.108213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.108585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.108596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 
00:34:19.328 [2024-05-13 20:47:35.108921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.109174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.109183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.109531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.109901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.109910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.110237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.110608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.110617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.111016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.111341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.111350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.111696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.111917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.111926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.112260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.112511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.112520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.112881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.113219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.113227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 
00:34:19.328 [2024-05-13 20:47:35.113570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.113940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.113949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.114279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.114647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.114656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.114985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.115365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.115376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.115724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.116107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.116115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.116448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.116767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.116776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.117126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.117481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.117490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.117818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.118184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.118193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 
00:34:19.328 [2024-05-13 20:47:35.118530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.118899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.118908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.119236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.119572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.119581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.328 [2024-05-13 20:47:35.119813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.120041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.328 [2024-05-13 20:47:35.120050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.328 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.120394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.120590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.120600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.120925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.121276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.121284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.121629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.122001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.122012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.122361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.122726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.122735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 
00:34:19.329 [2024-05-13 20:47:35.123110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.123460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.123469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.123827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.124183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.124192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.124523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.124876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.124885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.125088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.125417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.125427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.125795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.126029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.126038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.126289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.126675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.126684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.127025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.127401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.127410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 
00:34:19.329 [2024-05-13 20:47:35.127733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.128109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.128117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.128452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.128789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.128801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.129121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.129456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.129465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.129791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.130160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.130169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.130475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.130847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.130856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.131238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.131546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.131555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.131903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.132237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.132245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 
00:34:19.329 [2024-05-13 20:47:35.132617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.132974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.132983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.133324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.133663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.133672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.134039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.134377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.134386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.134713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.135067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.135076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.135442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.135653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.135662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.136000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.136339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.136348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.136677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.137045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.137054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 
00:34:19.329 [2024-05-13 20:47:35.137397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.137743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.137752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.138080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.138450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.138459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.138824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.139227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.139235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.139586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.139957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.139966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.329 qpair failed and we were unable to recover it. 00:34:19.329 [2024-05-13 20:47:35.140299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.329 [2024-05-13 20:47:35.140646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.140656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.141026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.141363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.141372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.141684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.142062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.142072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 
00:34:19.330 [2024-05-13 20:47:35.142448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.142785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.142794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.143140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.143445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.143454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.143798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.144101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.144109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.144462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.144826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.144835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.145163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.145498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.145508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.145876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.146218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.146227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.146575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.146920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.146928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 
00:34:19.330 [2024-05-13 20:47:35.147254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.147611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.147620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.147956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.148330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.148339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.148683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.149036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.149045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.149376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.149763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.149772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.150089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.150462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.150471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.150762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.151123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.151132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.151463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.151692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.151701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 
00:34:19.330 [2024-05-13 20:47:35.152028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.152404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.152413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.152662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.153037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.153046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.153418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.153841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.153850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.154149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.154486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.154495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.154832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.155194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.155203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.155558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.155927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.155936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.156139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.156477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.156486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 
00:34:19.330 [2024-05-13 20:47:35.156811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.157197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.157207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.330 [2024-05-13 20:47:35.157579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.157915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.330 [2024-05-13 20:47:35.157925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.330 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.158276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.158639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.158649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.159023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.159275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.159284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.159615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.159993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.160003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.160373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.160719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.160728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.161084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.161488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.161497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 
00:34:19.331 [2024-05-13 20:47:35.161882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.162253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.162261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.162594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.162966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.162974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.163282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.163607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.163616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.163941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.164298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.164306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.164635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.164963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.164973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.165319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.165525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.165534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.165877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.166211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.166220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 
00:34:19.331 [2024-05-13 20:47:35.166605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.166951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.166960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.167330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.167585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.167594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.167929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.168302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.168311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.168678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.169040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.169049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.169400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.169777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.169786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.170117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.170486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.170495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.170902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.171230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.171239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 
00:34:19.331 [2024-05-13 20:47:35.171583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.171948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.171957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.172315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.172660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.172669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.172912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.173256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.173265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.173606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.173946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.173956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.174170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.174547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.174557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.174760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.175123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.175132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.175468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.175819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.175828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 
00:34:19.331 [2024-05-13 20:47:35.176154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.176422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.176431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.176757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.177090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.177098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.177401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.177725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.177734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.331 qpair failed and we were unable to recover it. 00:34:19.331 [2024-05-13 20:47:35.178056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.331 [2024-05-13 20:47:35.178419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.178428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.178754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.179026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.179035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.179368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.179703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.179712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.180046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.180283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.180292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 
00:34:19.332 [2024-05-13 20:47:35.180549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.180923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.180932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.181209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.181449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.181458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.181789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.182160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.182169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.182499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.182811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.182820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.183151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.183518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.183527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.183710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.184011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.184020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.184208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.184582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.184591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 
00:34:19.332 [2024-05-13 20:47:35.184815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.185188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.185197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.185547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.185922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.185931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.186257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.186650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.186659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.186998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.187357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.187367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.187582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.187997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.188007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.188347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.188723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.188732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.189059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.189432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.189442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 
00:34:19.332 [2024-05-13 20:47:35.189695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.189950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.189958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.190294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.190710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.190719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.191055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.191430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.191439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.191767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.192034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.192042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.192370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.192725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.192734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.193027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.193361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.193370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.193708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.194017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.194026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 
00:34:19.332 [2024-05-13 20:47:35.194387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.194723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.194732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.195066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.195234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.195244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.195604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.195939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.195947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.196276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.196647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.196656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.196991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.197367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.332 [2024-05-13 20:47:35.197377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.332 qpair failed and we were unable to recover it. 00:34:19.332 [2024-05-13 20:47:35.197756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.198130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.198139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.198463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.198663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.198672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 
00:34:19.333 [2024-05-13 20:47:35.199001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.199380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.199389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.199736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.200055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.200064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.200343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.200686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.200695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.201026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.201267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.201276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.201638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.201971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.201980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.202233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.202584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.202593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.202926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.203117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.203126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 
00:34:19.333 [2024-05-13 20:47:35.203420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.203764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.203773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.204101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.204467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.204477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.204827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.205016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.205025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.205241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.205588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.205597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.205949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.206366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.206375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.206716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.207038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.207048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.207395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.207751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.207760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 
00:34:19.333 [2024-05-13 20:47:35.208127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.208540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.208549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.208873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.209238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.209247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.209580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.209947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.209955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.210308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.210664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.210674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.210998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.211368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.211378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.211701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.211904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.211914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.212236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.212611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.212621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 
00:34:19.333 [2024-05-13 20:47:35.212962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.213198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.213207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.213545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.213878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.213886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.214183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.214535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.214544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.214879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.215241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.215250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.215602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.215812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.215822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.216023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.216378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.216388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.333 qpair failed and we were unable to recover it. 00:34:19.333 [2024-05-13 20:47:35.216733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.333 [2024-05-13 20:47:35.217069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.217080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 
00:34:19.334 [2024-05-13 20:47:35.217405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.217776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.217784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.218183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.218520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.218530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.218860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.219231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.219240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.219428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.219734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.219743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.220154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.220493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.220502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.220856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.221215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.221224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.221557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.221928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.221937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 
00:34:19.334 [2024-05-13 20:47:35.222262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.222633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.222643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.222977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.223355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.223364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.223715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.224048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.224059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.224391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.224704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.224713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.224970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.225351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.225360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.225697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.226060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.226069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.226419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.226703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.226712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 
00:34:19.334 [2024-05-13 20:47:35.227054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.227399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.227408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.227739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.228111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.228120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.228448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.228809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.228818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.229173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.229520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.229529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.229783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.230008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.230018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.230348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.230724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.230735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.231101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.231321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.231331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 
00:34:19.334 [2024-05-13 20:47:35.231664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.232002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.232011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.232110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.232438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.232447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.232771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.233148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.334 [2024-05-13 20:47:35.233157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.334 qpair failed and we were unable to recover it. 00:34:19.334 [2024-05-13 20:47:35.233509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.233883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.233892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.234254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.234506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.234515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.234839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.235218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.235227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.235601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.235962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.235971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 
00:34:19.335 [2024-05-13 20:47:35.236287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.236449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.236459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.236711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.237054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.237066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.237422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.237668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.237676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.237987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.238298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.238307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.238565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.238922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.238930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.239303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.239528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.239537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.239920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.240165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.240174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 
00:34:19.335 [2024-05-13 20:47:35.240523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.240739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.240748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.241125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.241466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.241476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.241839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.242203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.242212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.242608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.242963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.242971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.243325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.243683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.243693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.244064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.244405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.244414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.244704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.245034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.245043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 
00:34:19.335 [2024-05-13 20:47:35.245368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.245679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.245688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.246018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.246380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.246390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.246724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.247083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.247091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.247450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.247818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.247827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.248160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.248535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.248544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.248821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.249201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.249210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.249554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.249927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.249935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 
00:34:19.335 [2024-05-13 20:47:35.250264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.250473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.250483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.250805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.251142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.251151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.251500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.251838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.251847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.252171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.252523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.252532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.335 qpair failed and we were unable to recover it. 00:34:19.335 [2024-05-13 20:47:35.252741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.253117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.335 [2024-05-13 20:47:35.253125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-05-13 20:47:35.253450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-05-13 20:47:35.253692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-05-13 20:47:35.253701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-05-13 20:47:35.253984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-05-13 20:47:35.254340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-05-13 20:47:35.254349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 
00:34:19.336 [2024-05-13 20:47:35.254677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-05-13 20:47:35.255049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-05-13 20:47:35.255058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-05-13 20:47:35.255243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-05-13 20:47:35.255558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-05-13 20:47:35.255567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-05-13 20:47:35.255768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-05-13 20:47:35.255981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.336 [2024-05-13 20:47:35.255991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.336 qpair failed and we were unable to recover it. 00:34:19.336 [2024-05-13 20:47:35.256359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-05-13 20:47:35.256714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-05-13 20:47:35.256724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.606 [2024-05-13 20:47:35.257108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-05-13 20:47:35.257462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.606 [2024-05-13 20:47:35.257472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.606 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.257709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.258084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.258093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.258469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.258843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.258852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 
00:34:19.607 [2024-05-13 20:47:35.259188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.259556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.259566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.259935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.260273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.260283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.260616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.260951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.260961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.261320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.261651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.261660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.261914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.262295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.262304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.262687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.263035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.263045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.263396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.263602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.263612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 
00:34:19.607 [2024-05-13 20:47:35.263991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.264319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.264329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.264676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.265034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.265043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.265409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.265751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.265760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.266111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.266444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.266453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.266782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.267038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.267047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.267382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.267757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.267767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.268095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.268477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.268486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 
00:34:19.607 [2024-05-13 20:47:35.268839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.269208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.269217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.269562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.269928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.269937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.270277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.270612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.270621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.270949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.271325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.271335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.271687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.272056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.272065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.272390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.272790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.272800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.273155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.273507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.273516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 
00:34:19.607 [2024-05-13 20:47:35.273894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.274268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.274278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.274619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.274950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.274960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.275152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.275516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.275526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.275855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.276222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.276231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.276568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.276791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.276800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.277135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.277507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.277516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.607 qpair failed and we were unable to recover it. 00:34:19.607 [2024-05-13 20:47:35.277891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.607 [2024-05-13 20:47:35.278264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.278273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 
00:34:19.608 [2024-05-13 20:47:35.278639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.279036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.279045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.279425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.279796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.279804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.280136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.280498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.280509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.280836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.281211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.281221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.281578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.281910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.281920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.282252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.282595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.282605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.282954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.283179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.283188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 
00:34:19.608 [2024-05-13 20:47:35.283434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.283770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.283779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.284129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.284509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.284519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.284855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.285216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.285225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.285579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.285953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.285961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.286289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.286678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.286687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.287016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.287390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.287399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.287748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.288130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.288139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 
00:34:19.608 [2024-05-13 20:47:35.288465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.288840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.288849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.289172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.289515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.289525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.289856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.290227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.290236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.290437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.290740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.290749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.291103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.291463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.291473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.291724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.292017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.292026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.292456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.292822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.292831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 
00:34:19.608 [2024-05-13 20:47:35.293063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.293423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.293432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.293779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.294139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.294149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.294476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.294839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.294848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.295199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.295647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.295656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.296012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.296394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.296403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.296736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.296901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.296911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.608 [2024-05-13 20:47:35.297246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.297489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.297499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 
00:34:19.608 [2024-05-13 20:47:35.297781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.298121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.608 [2024-05-13 20:47:35.298130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.608 qpair failed and we were unable to recover it. 00:34:19.609 [2024-05-13 20:47:35.298479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.609 [2024-05-13 20:47:35.298867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.609 [2024-05-13 20:47:35.298876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.609 qpair failed and we were unable to recover it. 00:34:19.609 [2024-05-13 20:47:35.299207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.609 [2024-05-13 20:47:35.299562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.609 [2024-05-13 20:47:35.299572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.609 qpair failed and we were unable to recover it. 00:34:19.609 [2024-05-13 20:47:35.299953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.609 [2024-05-13 20:47:35.300289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.609 [2024-05-13 20:47:35.300298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.609 qpair failed and we were unable to recover it. 00:34:19.609 [2024-05-13 20:47:35.300679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.609 [2024-05-13 20:47:35.300911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.609 [2024-05-13 20:47:35.300920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.609 qpair failed and we were unable to recover it. 00:34:19.609 [2024-05-13 20:47:35.301289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.609 [2024-05-13 20:47:35.301635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.609 [2024-05-13 20:47:35.301644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.609 qpair failed and we were unable to recover it. 00:34:19.609 [2024-05-13 20:47:35.301979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.609 [2024-05-13 20:47:35.302347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.609 [2024-05-13 20:47:35.302357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.609 qpair failed and we were unable to recover it. 
00:34:19.609 [2024-05-13 20:47:35.302622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.609 [2024-05-13 20:47:35.302913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.609 [2024-05-13 20:47:35.302923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420
00:34:19.609 qpair failed and we were unable to recover it.
[... the same sequence of posix_sock_create connect() failed (errno = 111) and nvme_tcp_qpair_connect_sock sock connection errors for tqpair=0x7fd354000b90 (addr=10.0.0.2, port=4420) repeats continuously from 20:47:35.303164 through 20:47:35.404776; every connection attempt in this span ends with "qpair failed and we were unable to recover it." ...]
00:34:19.614 [2024-05-13 20:47:35.405118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.614 [2024-05-13 20:47:35.405372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.614 [2024-05-13 20:47:35.405381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420
00:34:19.614 qpair failed and we were unable to recover it.
00:34:19.614 [2024-05-13 20:47:35.405713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.406078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.406087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.614 qpair failed and we were unable to recover it. 00:34:19.614 [2024-05-13 20:47:35.406439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.406847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.406856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.614 qpair failed and we were unable to recover it. 00:34:19.614 [2024-05-13 20:47:35.407201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.407580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.407590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.614 qpair failed and we were unable to recover it. 00:34:19.614 [2024-05-13 20:47:35.407838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.408127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.408137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.614 qpair failed and we were unable to recover it. 00:34:19.614 [2024-05-13 20:47:35.408472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.408825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.408834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.614 qpair failed and we were unable to recover it. 00:34:19.614 [2024-05-13 20:47:35.409180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.409493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.409503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.614 qpair failed and we were unable to recover it. 00:34:19.614 [2024-05-13 20:47:35.409855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.410195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.410204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.614 qpair failed and we were unable to recover it. 
00:34:19.614 [2024-05-13 20:47:35.410620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.410840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.410849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.614 qpair failed and we were unable to recover it. 00:34:19.614 [2024-05-13 20:47:35.411206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.411578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.411587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.614 qpair failed and we were unable to recover it. 00:34:19.614 [2024-05-13 20:47:35.411916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.412165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.412173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.614 qpair failed and we were unable to recover it. 00:34:19.614 [2024-05-13 20:47:35.412514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.412846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.614 [2024-05-13 20:47:35.412854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.614 qpair failed and we were unable to recover it. 00:34:19.614 [2024-05-13 20:47:35.413196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.413542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.413551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.413916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.414292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.414301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.414641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.415028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.415038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 
00:34:19.615 [2024-05-13 20:47:35.415407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.415744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.415753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.416092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.416460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.416469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.416806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.417051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.417060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.417435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.417770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.417779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.418102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.418482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.418492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.418884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.419214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.419223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.419571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.419904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.419913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 
00:34:19.615 [2024-05-13 20:47:35.420236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.420580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.420590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.420886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.421110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.421118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.421321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.421666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.421675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.421997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.422375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.422384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.422740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.423078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.423087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.423272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.423595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.423604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.423928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.424301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.424309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 
00:34:19.615 [2024-05-13 20:47:35.424688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.425041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.425050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.425505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.425828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.425837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.426205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.426608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.426617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.426940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.427276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.427287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.427671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.428005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.428014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.428217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.428546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.428556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.615 [2024-05-13 20:47:35.428917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.429256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.429265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 
00:34:19.615 [2024-05-13 20:47:35.429607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.429929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.615 [2024-05-13 20:47:35.429939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.615 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.430309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.430656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.430664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.430915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.431284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.431293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.431613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.431963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.431972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.432294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.432667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.432676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.433009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.433373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.433383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.433750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.434089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.434099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 
00:34:19.616 [2024-05-13 20:47:35.434467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.434801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.434810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.435138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.435481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.435490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.435848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.436204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.436213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.436410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.436729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.436738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.437109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.437442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.437451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.437780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.438048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.438056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.438385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.438645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.438654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 
00:34:19.616 [2024-05-13 20:47:35.439007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.439351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.439360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.439738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.440091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.440100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.440451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.440793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.440804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.441132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.441438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.441447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.441802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.442157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.442166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.442494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.442830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.442838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.443168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.443533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.443542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 
00:34:19.616 [2024-05-13 20:47:35.443883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.444261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.444271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.444639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.444968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.444978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.445346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.445691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.445700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.446026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.446405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.446414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.446741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.447109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.447118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.447460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.447811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.447822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.448147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.448373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.448383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 
00:34:19.616 [2024-05-13 20:47:35.448730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.449097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.449105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.449526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.449851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.449860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.616 qpair failed and we were unable to recover it. 00:34:19.616 [2024-05-13 20:47:35.450186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.616 [2024-05-13 20:47:35.450563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.450572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.450895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.451265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.451273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.451613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.451966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.451975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.452318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.452659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.452669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.452867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.453187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.453197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 
00:34:19.617 [2024-05-13 20:47:35.453539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.453827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.453836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.454190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.454527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.454536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.454835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.455210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.455219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.455580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.455880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.455889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.456225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.456586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.456595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.456883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.457205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.457214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.457612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.457969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.457978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 
00:34:19.617 [2024-05-13 20:47:35.458328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.458636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.458645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.458936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.459278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.459287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.459721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.460054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.460063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.460432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.460804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.460813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.461138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.461375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.461384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.461740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.462077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.462086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.462434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.462754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.462763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 
00:34:19.617 [2024-05-13 20:47:35.463057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.463415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.463424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.463838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.464170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.464178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.464593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.464928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.464937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.465278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.465609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.465618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.466034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.466387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.466397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.466756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.467089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.467097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.467502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.467730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.467739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 
00:34:19.617 [2024-05-13 20:47:35.468059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.468397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.468406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.468689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.469023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.469032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.469243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.469564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.469573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.617 qpair failed and we were unable to recover it. 00:34:19.617 [2024-05-13 20:47:35.469856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.617 [2024-05-13 20:47:35.470222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.470231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.470578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.470805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.470815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.471194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.471573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.471582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.471952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.472293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.472302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 
00:34:19.618 [2024-05-13 20:47:35.472638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.472875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.472884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.473169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.473568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.473577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.473911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.474281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.474289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.474565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.474897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.474906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.475270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.475623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.475633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.475987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.476273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.476281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.476603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.476981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.476990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 
00:34:19.618 [2024-05-13 20:47:35.477340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.477674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.477682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.478013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.478223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.478233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.478484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.478814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.478823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.479193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.479570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.479579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.479905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.480276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.480285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.480531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.480783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.480792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.481135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.481398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.481407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 
00:34:19.618 [2024-05-13 20:47:35.481818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.482149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.482158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.482514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.482736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.482746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.483072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.483405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.483414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.483718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.484031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.484039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.484367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.484710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.484718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.485006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.485267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.485276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.485644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.485976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.485986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 
00:34:19.618 [2024-05-13 20:47:35.486206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.486390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.486399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.486802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.487134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.487143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.487495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.487848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.487856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.488182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.488554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.488563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.488887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.489259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.618 [2024-05-13 20:47:35.489268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.618 qpair failed and we were unable to recover it. 00:34:19.618 [2024-05-13 20:47:35.489597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.489962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.489971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.490319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.490672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.490681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 
00:34:19.619 [2024-05-13 20:47:35.491051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.491398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.491407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.491738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.491987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.491996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.492241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.492555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.492564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.492888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.493263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.493272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.493608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.494014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.494023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.494363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.494719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.494728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.495081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.495458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.495467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 
00:34:19.619 [2024-05-13 20:47:35.495762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.496107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.496116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.496443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.496787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.496796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.497120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.497330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.497340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.497685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.498025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.498034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.498359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.498710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.498719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.499055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.499278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.499287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.499489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.499786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.499795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 
00:34:19.619 [2024-05-13 20:47:35.500116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.500505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.500514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.500921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.501252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.501261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.501607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.501938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.501947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.502295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.502680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.502689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.502751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.503079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.503089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.503419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.503786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.503795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.504085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.504429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.504438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 
00:34:19.619 [2024-05-13 20:47:35.504769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.505087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.505095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.505435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.505785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.505793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.506124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.506490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.506500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.506873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.507120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.507130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.507480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.507783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.507792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.508165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.508468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.508477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 00:34:19.619 [2024-05-13 20:47:35.508839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.509212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.619 [2024-05-13 20:47:35.509221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.619 qpair failed and we were unable to recover it. 
00:34:19.620 [2024-05-13 20:47:35.509554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.509771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.509780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.510124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.510502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.510511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.510844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.511184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.511192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.511528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.511874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.511882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.512209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.512581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.512590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.512916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.513295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.513304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.513652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.514007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.514016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 
00:34:19.620 [2024-05-13 20:47:35.514365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.514698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.514707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.515027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.515333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.515343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.515668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.516044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.516053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.516385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.516649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.516658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.517007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.517347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.517356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.517681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.518035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.518044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.518293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.518516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.518526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 
00:34:19.620 [2024-05-13 20:47:35.518861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.519016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.519025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.519363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.519716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.519724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.520051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.520263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.520272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.520584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.520929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.520938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.521143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.521528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.521537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.521869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.522239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.522248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.522645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.522979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.522989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 
00:34:19.620 [2024-05-13 20:47:35.523336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.523682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.523691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.524018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.524380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.524390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.524776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.525027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.525036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.620 [2024-05-13 20:47:35.525403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.525773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.620 [2024-05-13 20:47:35.525782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.620 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.526126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.526471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.526480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.526821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.527156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.527165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.527527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.527750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.527759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 
00:34:19.621 [2024-05-13 20:47:35.528101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.528462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.528472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.528830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.529201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.529210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.529568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.529772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.529780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.530142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.530477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.530486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.530731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.531112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.531120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.531462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.531814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.531823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.532194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.532442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.532451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 
00:34:19.621 [2024-05-13 20:47:35.532803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.533091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.533100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.533451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.533858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.533867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.534197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.534573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.534582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.534913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.535239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.535250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.535614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.535965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.535973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.536197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.536522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.536531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.536745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.537068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.537077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 
00:34:19.621 [2024-05-13 20:47:35.537338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.537709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.537717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.538063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.538439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.538448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.538823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.539163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.539172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.539400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.539779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.539788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.540118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.540342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.621 [2024-05-13 20:47:35.540352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.621 qpair failed and we were unable to recover it. 00:34:19.621 [2024-05-13 20:47:35.540655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.540910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.540921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-05-13 20:47:35.541163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.541503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.541514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 
00:34:19.891 [2024-05-13 20:47:35.541761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.542109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.542117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-05-13 20:47:35.542365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.542727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.542736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-05-13 20:47:35.543059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.543284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.543292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-05-13 20:47:35.543705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.544063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.544072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-05-13 20:47:35.544320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.544667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.544676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-05-13 20:47:35.545045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.545402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.545411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-05-13 20:47:35.545749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.546086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.546095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 
00:34:19.891 [2024-05-13 20:47:35.546484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.546832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.546842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-05-13 20:47:35.547202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.547454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.547463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-05-13 20:47:35.547805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.548138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.548149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-05-13 20:47:35.548392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.548594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.548604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-05-13 20:47:35.548938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.549144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.549153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-05-13 20:47:35.549490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.549863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-05-13 20:47:35.549871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.550261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.550511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.550520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 
00:34:19.892 [2024-05-13 20:47:35.550828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.551143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.551153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.551508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.551878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.551886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.552234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.552592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.552601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.552936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.553293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.553301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.553644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.553996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.554005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.554250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.554664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.554675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.554860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.555174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.555184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 
00:34:19.892 [2024-05-13 20:47:35.555531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.555865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.555874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.556224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.556538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.556547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.556873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.557188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.557197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.557513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.557828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.557836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.558246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.558488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.558498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.558771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.559104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.559113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.559556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.559894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.559903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 
00:34:19.892 [2024-05-13 20:47:35.560165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.560518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.560527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.560965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.561254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.561263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.561617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.561957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.561966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.562334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.562696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.562705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.563033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.563417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.563426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.563758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.564074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.564082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.564389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.564694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.564703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 
00:34:19.892 [2024-05-13 20:47:35.564941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.565299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.565308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.565677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.566046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.566054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.566426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.566725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.566733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-05-13 20:47:35.567088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-05-13 20:47:35.567421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.567431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.567790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.568140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.568148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.568481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.568745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.568753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.569083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.569329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.569339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 
00:34:19.893 [2024-05-13 20:47:35.569684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.570085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.570094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.570446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.570793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.570802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.571146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.571504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.571513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.571841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.572149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.572157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.572490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.572852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.572860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.573191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.573570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.573579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.573901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.574256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.574265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 
00:34:19.893 [2024-05-13 20:47:35.574615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.574971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.574980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.575319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.575651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.575659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.575830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.576042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.576050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.576391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.576690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.576699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.577073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.577406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.577415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.577780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.577953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.577962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.578308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.578670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.578678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 
00:34:19.893 [2024-05-13 20:47:35.579003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.579380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.579389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.579700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.580068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.580077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.580404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.580760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.580769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.581096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.581467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.581477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.581826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.582166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.582175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.582464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.582834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.582843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 00:34:19.893 [2024-05-13 20:47:35.583174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.583577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.893 [2024-05-13 20:47:35.583586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.893 qpair failed and we were unable to recover it. 
00:34:19.894 [2024-05-13 20:47:35.583757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.584131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.584140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.584398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.584759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.584768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.585099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.585433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.585443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.585798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.586106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.586114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.586499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.586857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.586865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.587224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.587471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.587480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.587821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.588168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.588176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 
00:34:19.894 [2024-05-13 20:47:35.588500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.588817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.588826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.589157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.589533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.589542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.589879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.590249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.590257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.590461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.590798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.590806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.591161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.591434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.591444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.591776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.592142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.592152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.592503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.592695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.592704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 
00:34:19.894 [2024-05-13 20:47:35.593047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.593294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.593303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.593720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.594052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.594060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.594399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.594733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.594741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.595048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.595422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.595431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.595760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.596096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.596105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.596431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.596815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.596824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.597147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.597466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.597475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 
00:34:19.894 [2024-05-13 20:47:35.597823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.598139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.598147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.598517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.598813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.598821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.894 qpair failed and we were unable to recover it. 00:34:19.894 [2024-05-13 20:47:35.599181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.894 [2024-05-13 20:47:35.599517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.599526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.599869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.600251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.600261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.600595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.600931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.600941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.601311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.601650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.601659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.601946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.602256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.602265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 
00:34:19.895 [2024-05-13 20:47:35.602607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.602959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.602969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.603324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.603653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.603663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.603920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.604128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.604138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.604463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.604820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.604829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.605175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.605548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.605557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.605806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.606070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.606079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.606410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.606740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.606748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 
00:34:19.895 [2024-05-13 20:47:35.607075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.607409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.607418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.607741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.608119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.608128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.608431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.608758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.608767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.609102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.609354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.609368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.609724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.610058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.610066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.610390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.610594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.610603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.610959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.611307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.611323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 
00:34:19.895 [2024-05-13 20:47:35.611727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.612076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.612085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.612477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.612839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.612848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.613191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.613420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.613429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.613661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.613995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.614004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.895 [2024-05-13 20:47:35.614331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.614586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.895 [2024-05-13 20:47:35.614594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.895 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.614920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.615257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.615267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.615603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.615982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.615992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 
00:34:19.896 [2024-05-13 20:47:35.616340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.616501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.616511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.616829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.617164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.617173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.617498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.617870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.617879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.618208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.618454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.618463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.618818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.619122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.619130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.619469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.619798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.619806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.620130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.620507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.620517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 
00:34:19.896 [2024-05-13 20:47:35.620723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.620949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.620959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.621311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.621657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.621666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.621995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.622368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.622378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.622727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.622991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.622999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.623332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.623664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.623673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.624026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.624360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.624370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.624716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.625069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.625078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 
00:34:19.896 [2024-05-13 20:47:35.625449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.625792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.625801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.626152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.626499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.626508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.626870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.627234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.627243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.627579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.627951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.627960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.896 qpair failed and we were unable to recover it. 00:34:19.896 [2024-05-13 20:47:35.628321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.896 [2024-05-13 20:47:35.628504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.628515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.628848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.629110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.629119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.629469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.629810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.629819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 
00:34:19.897 [2024-05-13 20:47:35.630187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.630644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.630654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.630880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.631108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.631117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.631446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.631795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.631803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.632135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.632464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.632473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.632815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.633150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.633159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.633490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.633672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.633681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.633999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.634346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.634356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 
00:34:19.897 [2024-05-13 20:47:35.634674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.635045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.635056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.635369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.635705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.635714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.635975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.636354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.636363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.636697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.637041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.637050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.637376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.637646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.637655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.638007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.638356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.638365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.638676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.639045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.639054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 
00:34:19.897 [2024-05-13 20:47:35.639429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.639792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.639801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.640126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.640497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.640506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.640712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.640946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.640955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.641274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.641687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.641699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.642042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.642402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.642412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.642662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.643026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.643035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.643232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.643570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.643579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 
00:34:19.897 [2024-05-13 20:47:35.643907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.644276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.644285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.644620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.644993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.645002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.645338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.645672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.645682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.897 [2024-05-13 20:47:35.646020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.646354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.897 [2024-05-13 20:47:35.646363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.897 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.646685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.647032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.647042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.647369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.647740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.647750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.647942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.648320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.648333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 
00:34:19.898 [2024-05-13 20:47:35.648684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.649057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.649067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.649396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.649627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.649637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.649958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.650179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.650188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.650550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.650893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.650902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.651231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.651601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.651611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.651961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.652299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.652308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.652629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.653025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.653034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 
00:34:19.898 [2024-05-13 20:47:35.653371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.653732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.653741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.654067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.654468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.654477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.654832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.655179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.655190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.655517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.655718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.655727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.656043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.656417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.656426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.656761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.657102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.657111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.657445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.657861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.657870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 
00:34:19.898 [2024-05-13 20:47:35.658195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.658525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.658534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.658901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.659131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.659140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.659482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.659841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.659849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.660049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.660365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.660376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.660716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.661065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.661075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.661422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.661707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.661715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.662042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.662264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.662274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 
00:34:19.898 [2024-05-13 20:47:35.662636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.662953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.662962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-05-13 20:47:35.663290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-05-13 20:47:35.663651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.663660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.663986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.664356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.664365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.664591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.664965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.664975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.665327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.665654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.665663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.665996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.666368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.666378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.666728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.667131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.667140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 
00:34:19.899 [2024-05-13 20:47:35.667331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.667677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.667686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.668012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.668384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.668393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.668727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.669100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.669109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.669350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.669689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.669698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.670026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.670391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.670401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.670728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.671061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.671070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.671409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.671639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.671648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 
00:34:19.899 [2024-05-13 20:47:35.672023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.672402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.672412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.672795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.673110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.673119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.673443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.673670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.673679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.674027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.674208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.674217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.674569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.674946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.674954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.675283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.675506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.675516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.675837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.676207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.676215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 
00:34:19.899 [2024-05-13 20:47:35.676572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.676930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.676939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.677302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.677656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.677666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.677998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.678201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.678211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.678550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.678803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.678812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.679142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.679526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.679535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.679882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.680234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-05-13 20:47:35.680243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-05-13 20:47:35.680586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.680807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.680816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 
00:34:19.900 [2024-05-13 20:47:35.681163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.681495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.681504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.681839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.682196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.682205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.682578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.682904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.682913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.683243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.683544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.683553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.683879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.684146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.684154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.684522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.684893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.684902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.685245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.685598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.685607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 
00:34:19.900 [2024-05-13 20:47:35.685937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.686323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.686333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.686683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.687009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.687018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.687387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.687730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.687739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.688122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.688487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.688496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.688834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.689202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.689210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.689647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.689972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.689981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.690319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.690657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.690666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 
00:34:19.900 [2024-05-13 20:47:35.690943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.691327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.691336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.691739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.692077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.692085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.692434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.692782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.692791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.693110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.693443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.693453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.693736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.694089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.694098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.694541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.694862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.694871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.695182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.695448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.695457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 
00:34:19.900 [2024-05-13 20:47:35.695838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.696171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.696179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.696356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.696778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.696788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.697114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.697448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.697458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.697785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.698126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.698135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.698461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.698839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-05-13 20:47:35.698848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-05-13 20:47:35.699174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.699425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.699434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.699766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.700144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.700153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 
00:34:19.901 [2024-05-13 20:47:35.700481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.700829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.700838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.701162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.701427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.701436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.701845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.702183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.702192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.702372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.702706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.702715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.703071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.703425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.703436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.703762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.704079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.704089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.704431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.704774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.704785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 
00:34:19.901 [2024-05-13 20:47:35.705113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.705468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.705477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.705823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.706174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.706183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.706539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.706906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.706915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.707257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.707604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.707613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.707815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.708113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.708122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.708452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.708738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.708747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.709084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.709451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.709461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 
00:34:19.901 [2024-05-13 20:47:35.709770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.710121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.710130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.710463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.710818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.710826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.711167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.711503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.711512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.711843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.712162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.712170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-05-13 20:47:35.712519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.712861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-05-13 20:47:35.712869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.713138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.713432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.713441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.713809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.714191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.714201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 
00:34:19.902 [2024-05-13 20:47:35.714538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.714869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.714878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.715062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.715305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.715319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.715684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.715908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.715919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.716288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.716626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.716637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.716954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.717298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.717307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.717553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.717886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.717896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.718239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.718575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.718585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 
00:34:19.902 [2024-05-13 20:47:35.718932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.719152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.719161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.719487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.719848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.719858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.720254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.720568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.720578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.720835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.721164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.721173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.721530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.721872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.721882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.722106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.722417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.722427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.722620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.722928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.722937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 
00:34:19.902 [2024-05-13 20:47:35.723290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.723646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.723655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.724002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.724359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.724369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.724610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.724945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.724954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.725157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.725334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.725343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.725679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.726032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.726041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.726414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.726673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.726682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.727019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.727375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.727385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 
00:34:19.902 [2024-05-13 20:47:35.727766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.728117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.728125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.728469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.728842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.728851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-05-13 20:47:35.729202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.729551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-05-13 20:47:35.729560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.729937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.730294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.730303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.730640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.730999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.731007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.731359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.731694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.731703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.732049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.732426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.732435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 
00:34:19.903 [2024-05-13 20:47:35.732740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.733029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.733037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.733388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.733741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.733749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.734161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.734430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.734440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.734700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.735144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.735153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.735523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.735864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.735874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.736224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.736416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.736426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.736785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.737025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.737034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 
00:34:19.903 [2024-05-13 20:47:35.737395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.737587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.737597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.737930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.738292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.738300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.738648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.739021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.739030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.739376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.739727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.739736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.740031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.740274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.740283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.740630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.740983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.740992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.741295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.741660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.741670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 
00:34:19.903 [2024-05-13 20:47:35.742031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.742408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.742420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.742775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.743079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.743087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.743378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.743750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.743758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.743959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.744197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.744206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.744537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.744900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.744908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.745248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.745618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.745627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 00:34:19.903 [2024-05-13 20:47:35.745983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.746327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.746337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.903 qpair failed and we were unable to recover it. 
00:34:19.903 [2024-05-13 20:47:35.746628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.903 [2024-05-13 20:47:35.746855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.746864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.747198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.747531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.747549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.747906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.748156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.748165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.748501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.748877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.748888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.749214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.749456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.749465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.749813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.750148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.750158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.750508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.750872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.750881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 
00:34:19.904 [2024-05-13 20:47:35.751254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.751430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.751439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.751767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.752099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.752108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.752351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.752664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.752673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.753021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.753280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.753290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.753632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.753876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.753886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.754244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.754610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.754620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.754963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.755328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.755339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 
00:34:19.904 [2024-05-13 20:47:35.755512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.755848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.755857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.756174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.756425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.756434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.756785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.757122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.757131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.757512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.757861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.757870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.758061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.758398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.758408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.758618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.758918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.758927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.759254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.759455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.759465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 
00:34:19.904 [2024-05-13 20:47:35.759720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.760089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.760098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.760427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.760783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.760792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.761120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.761486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.904 [2024-05-13 20:47:35.761498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.904 qpair failed and we were unable to recover it. 00:34:19.904 [2024-05-13 20:47:35.761880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.762227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.762237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.762502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.762852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.762861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.763190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.763565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.763575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.763855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.764091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.764100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 
00:34:19.905 [2024-05-13 20:47:35.764427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.764803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.764812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.765141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.765334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.765344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.765667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.766035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.766044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.766247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.766560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.766569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.766862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.767164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.767173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.767506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.767843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.767851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.768186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.768551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.768561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 
00:34:19.905 [2024-05-13 20:47:35.768908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.769246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.769255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.769544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.769774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.769782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.770121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.770460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.770469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.770806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.771134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.771143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.771470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.771819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.771828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.772151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.772528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.772538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.772779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.773114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.773123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 
00:34:19.905 [2024-05-13 20:47:35.773452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.773813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.773822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.774147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.774490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.774499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.774803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.775185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.775194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.775532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.775912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.775922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.776245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.776558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.776567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.776891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.777274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.777283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.777631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.778010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.778019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 
00:34:19.905 [2024-05-13 20:47:35.778349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.778687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.778696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.905 qpair failed and we were unable to recover it. 00:34:19.905 [2024-05-13 20:47:35.779040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.905 [2024-05-13 20:47:35.779410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.779419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.779752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.780120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.780129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.780533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.780879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.780888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.781215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.781406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.781416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.781749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.782124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.782132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.782464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.782809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.782818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 
00:34:19.906 [2024-05-13 20:47:35.783153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.783498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.783507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.783849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.784195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.784203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.784528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.784750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.784760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.785106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.785459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.785468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.785773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.786144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.786153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.786499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.786876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.786885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.787211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.787563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.787572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 
00:34:19.906 [2024-05-13 20:47:35.787938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.788332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.788341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.788614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.788928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.788937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.789317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.789637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.789646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.789899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.790238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.790247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.790680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.791010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.791019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.791293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.791696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.791705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.792049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.792405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.792420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 
00:34:19.906 [2024-05-13 20:47:35.792782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.793119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.793128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.793467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.793829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.793838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.906 [2024-05-13 20:47:35.794165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.794517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.906 [2024-05-13 20:47:35.794528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.906 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.794772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.795066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.795074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.795303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.795608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.795617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.795955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.796326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.796336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.796688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.797045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.797054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 
00:34:19.907 [2024-05-13 20:47:35.797393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.797640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.797648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.797877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.798210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.798219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.798575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.798910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.798919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.799275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.799612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.799621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.799953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.800144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.800154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.800508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.800862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.800871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.801231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.801604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.801613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 
00:34:19.907 [2024-05-13 20:47:35.801927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.802232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.802240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.802483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.802825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.802833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.803066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.803401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.803410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.803773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.804111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.804120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.804467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.804818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.804828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.805163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.805500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.805509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 00:34:19.907 [2024-05-13 20:47:35.805867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.806201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.907 [2024-05-13 20:47:35.806210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:19.907 qpair failed and we were unable to recover it. 
00:34:19.907 [2024-05-13 20:47:35.806446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.907 [2024-05-13 20:47:35.806640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.907 [2024-05-13 20:47:35.806650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420
00:34:19.907 qpair failed and we were unable to recover it.
[The same error sequence repeats continuously through [2024-05-13 20:47:35.909405] (log offsets 00:34:19.907 to 00:34:20.182): every connect() attempt fails in posix.c:1037:posix_sock_create with errno = 111, nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock then reports a sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420, and each qpair fails and cannot be recovered.]
00:34:20.182 [2024-05-13 20:47:35.909795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.182 [2024-05-13 20:47:35.910145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.182 [2024-05-13 20:47:35.910153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.182 qpair failed and we were unable to recover it. 00:34:20.182 [2024-05-13 20:47:35.910486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.182 [2024-05-13 20:47:35.910854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.182 [2024-05-13 20:47:35.910863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.182 qpair failed and we were unable to recover it. 00:34:20.182 [2024-05-13 20:47:35.911190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.182 [2024-05-13 20:47:35.911524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.182 [2024-05-13 20:47:35.911534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.182 qpair failed and we were unable to recover it. 00:34:20.182 [2024-05-13 20:47:35.911881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.182 [2024-05-13 20:47:35.912212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.912221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.912446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.912825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.912834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.913162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.913499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.913509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.913836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.914184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.914194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 
00:34:20.183 [2024-05-13 20:47:35.914544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.914875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.914883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.915252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.915560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.915569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.915926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.916241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.916250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.916571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.916861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.916869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.917189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.917453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.917462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.917838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.918226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.918235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.918550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.918923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.918932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 
00:34:20.183 [2024-05-13 20:47:35.919258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.919605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.919614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.919942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.920320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.920329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.920679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.921016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.921024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.921357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.921696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.921704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.922031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.922401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.922410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.922778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.923111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.923120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.183 qpair failed and we were unable to recover it. 00:34:20.183 [2024-05-13 20:47:35.923527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.183 [2024-05-13 20:47:35.923851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.923860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 
00:34:20.184 [2024-05-13 20:47:35.924194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.924569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.924578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.924903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.925197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.925206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.925587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.925959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.925969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.926167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.926497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.926506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.926874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.927094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.927104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.927478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.927912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.927921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.928255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.928587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.928597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 
00:34:20.184 [2024-05-13 20:47:35.928972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.929308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.929322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.929730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.930073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.930083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.930450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.930812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.930821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.931180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.931524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.931533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.931862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.932240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.932249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.932536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.932927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.932936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.933263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.933488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.933497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 
00:34:20.184 [2024-05-13 20:47:35.933901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.934232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.934240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.934581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.934763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.934773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.935096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.935396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.935405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.935755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.936088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.936097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.936464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.936817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.936826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.937163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.937502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.937512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.937809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.938168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.938177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 
00:34:20.184 [2024-05-13 20:47:35.938500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.938842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.938851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.939169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.939375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.939384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.939729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.940053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.940062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.940242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.940647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.940656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.940982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.941330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.941339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.941552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.941800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.941809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.942129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.942504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.942513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 
00:34:20.184 [2024-05-13 20:47:35.942867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.943165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.184 [2024-05-13 20:47:35.943173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.184 qpair failed and we were unable to recover it. 00:34:20.184 [2024-05-13 20:47:35.943393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.943734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.943744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.944102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.944485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.944494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.944866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.945200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.945210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.945588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.945973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.945983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.946182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.946522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.946531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.946888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.947260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.947268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 
00:34:20.185 [2024-05-13 20:47:35.947610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.947919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.947928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.948290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.948667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.948677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.949008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.949372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.949382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.949750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.950100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.950109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.950341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.950706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.950718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.951088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.951438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.951448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.951798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.952128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.952136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 
00:34:20.185 [2024-05-13 20:47:35.952488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.952721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.952730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.953066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.953272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.953282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.953627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.953994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.954004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.954256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.954462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.954473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.954714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.955063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.955073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.955407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.955752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.955761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.956115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.956487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.956497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 
00:34:20.185 [2024-05-13 20:47:35.956832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.957194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.957205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.957535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.957869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.957878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.958210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.958552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.958562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.958909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.959240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.959250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.959596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.959980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.959989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.960310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.960660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.960669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.961042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.961375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.961385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 
00:34:20.185 [2024-05-13 20:47:35.961697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.962067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.962075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.962402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.962741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.962750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.185 [2024-05-13 20:47:35.963120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.963504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.185 [2024-05-13 20:47:35.963514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.185 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.963845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.964188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.964199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.964567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.964900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.964908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.965243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.965451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.965462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.965822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.966153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.966161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 
00:34:20.186 [2024-05-13 20:47:35.966485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.966842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.966851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.967180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.967443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.967453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.967815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.968184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.968193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.968569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.968951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.968960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.969306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.969678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.969687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.969985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.970351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.970360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.970659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.974325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.974348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 
00:34:20.186 [2024-05-13 20:47:35.974734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.975085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.975096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.975429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.975683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.975694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.975921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.976273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.976290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.976531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.977196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.977214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.977563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.977941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.977952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.978291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.978641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.978651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.978867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.979212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.979221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 
00:34:20.186 [2024-05-13 20:47:35.979571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.979943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.979953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.980299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.980669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.980678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.981013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.981398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.981407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.981635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.981880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.981888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.982286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.982669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.982678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.983127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.983339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.983349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.983746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.984079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.984089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 
00:34:20.186 [2024-05-13 20:47:35.984382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.984593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.984602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.984919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.985298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.985307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.985636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.985980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.985990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.986330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.986666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.986675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.186 qpair failed and we were unable to recover it. 00:34:20.186 [2024-05-13 20:47:35.987025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.186 [2024-05-13 20:47:35.987369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.987380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.987721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.987964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.987972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.988333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.988688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.988697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 
00:34:20.187 [2024-05-13 20:47:35.988952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.989243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.989252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.989584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.989842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.989851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.990184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.990500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.990510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.990850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.991224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.991232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.991577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.991928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.991937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.992287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.992623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.992633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.992975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.993351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.993361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 
00:34:20.187 [2024-05-13 20:47:35.993712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.993893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.993903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.994287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.994497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.994507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.994871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.995202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.995210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.995570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.995775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.995784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.996142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.996517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.996527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.996853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.997052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.997062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.997387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.997729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.997738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 
00:34:20.187 [2024-05-13 20:47:35.998096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.998377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.998386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.998647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.998884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.998899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:35.999271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.999604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:35.999614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:36.000000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:36.000309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:36.000325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:36.000669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:36.000995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:36.001004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:36.001327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:36.001571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:36.001580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:36.001965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:36.002127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:36.002137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 
00:34:20.187 [2024-05-13 20:47:36.002364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:36.002709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.187 [2024-05-13 20:47:36.002718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.187 qpair failed and we were unable to recover it. 00:34:20.187 [2024-05-13 20:47:36.003056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.003432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.003441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.003675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.004044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.004053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.004383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.004692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.004701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.005103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.005439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.005448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.005784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.006154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.006163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.006561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.006908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.006917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 
00:34:20.188 [2024-05-13 20:47:36.007238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.007571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.007580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.007901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.008270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.008279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.008613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.008967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.008976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.009327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.009664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.009672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.010009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.010368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.010377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.010723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.011055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.011064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.011406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.011767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.011776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 
00:34:20.188 [2024-05-13 20:47:36.011979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.012354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.012363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.012697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.013058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.013067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.013520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.013900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.013909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.014294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.014590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.014600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.014967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.015164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.015173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.015518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.015860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.015869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.016196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.016486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.016495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 
00:34:20.188 [2024-05-13 20:47:36.016863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.017233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.017242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.017595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.018000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.018009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.018365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.018699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.018709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.019019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.019374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.019384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.019782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.020116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.020124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.020448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.020787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.020797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.021121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.021484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.021494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 
00:34:20.188 [2024-05-13 20:47:36.021919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.022207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.022216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.022561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.022897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.022906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.188 qpair failed and we were unable to recover it. 00:34:20.188 [2024-05-13 20:47:36.023233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.023463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.188 [2024-05-13 20:47:36.023473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.023844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.024176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.024184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.024608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.024989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.024998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.025324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.025663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.025673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3302265 Killed "${NVMF_APP[@]}" "$@" 00:34:20.189 [2024-05-13 20:47:36.026486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.026828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.026839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 
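(Context note, not part of the captured log.) The errno = 111 that posix_sock_create() keeps reporting above is Linux's ECONNREFUSED: the initiator's connect() to 10.0.0.2:4420 is actively refused because the old target process was just killed (the "line 44: ... Killed" message from target_disconnect.sh) and nothing is listening on the port yet, so every qpair reconnect attempt fails the same way. A minimal, purely illustrative bash probe of the same listener (host and port taken from the log; this is not part of the test suite) would behave like this:

#!/usr/bin/env bash
# Illustrative sketch only: probe the NVMe/TCP listener the log is retrying.
host=10.0.0.2
port=4420

# bash's /dev/tcp pseudo-device performs a TCP connect(); while the target is
# down the kernel refuses the connection, which is the ECONNREFUSED (errno 111)
# that SPDK's posix_sock_create() is logging above.
if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo "listener is up on ${host}:${port}"
else
    echo "connect() failed - with the target just killed this is ECONNREFUSED (errno 111)"
fi

Once a new nvmf_tgt instance is listening on port 4420 again, the same probe would be expected to succeed and the qpair connect retries in the log would stop failing.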
00:34:20.189 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:34:20.189 [2024-05-13 20:47:36.027205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:20.189 [2024-05-13 20:47:36.027524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.027535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:20.189 [2024-05-13 20:47:36.027903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:20.189 [2024-05-13 20:47:36.028235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:20.189 [2024-05-13 20:47:36.028246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.028630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.028818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.028828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.029147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.029471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.029480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.029824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.030157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.030167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.030520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.030773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.030783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 
00:34:20.189 [2024-05-13 20:47:36.031080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.031328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.031344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.031682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.031877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.031886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.032147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.032490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.032500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.032847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.033173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.033182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.033526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.033821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.033830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.034189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.034553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.034562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.034939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.035127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.035138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 
00:34:20.189 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3303137 00:34:20.189 [2024-05-13 20:47:36.035397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.035533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.035542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3303137 00:34:20.189 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:20.189 [2024-05-13 20:47:36.035950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3303137 ']' 00:34:20.189 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.189 [2024-05-13 20:47:36.036336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.036347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:20.189 [2024-05-13 20:47:36.036557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:20.189 [2024-05-13 20:47:36.036935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.036945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:20.189 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:20.189 [2024-05-13 20:47:36.037305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.037568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.037579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 
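(Context note, not part of the captured log.) The xtrace lines interleaved with the error flood show the test immediately restarting the target: disconnect_init 10.0.0.2 calls nvmfappstart -m 0xF0, which launches build/bin/nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, records nvmfpid=3303137, and then waitforlisten blocks until that process answers on /var/tmp/spdk.sock. The sketch below is only a rough approximation of that start-and-wait pattern, built from the paths and flags visible in the log; it is not the actual nvmfappstart/waitforlisten implementation from nvmf/common.sh or autotest_common.sh:

#!/usr/bin/env bash
# Illustrative approximation of the restart pattern in the trace above.
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock   # UNIX-domain RPC socket the test waits on

# Start the target in the test's network namespace with the flags from the log.
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# Poll until the app is still alive and its RPC socket exists.
for _ in $(seq 1 100); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    [[ -S "$RPC_SOCK" ]] && break
    sleep 0.1
done
[[ -S "$RPC_SOCK" ]] || { echo "timed out waiting for $RPC_SOCK" >&2; exit 1; }
echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"

After this point the test would re-create the TCP listener on 10.0.0.2:4420, which is why the connect() retries in the surrounding log are expected to eventually recover.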
00:34:20.189 [2024-05-13 20:47:36.037952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.038324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.038334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.038558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.038860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.038870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.039219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.039561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.189 [2024-05-13 20:47:36.039571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.189 qpair failed and we were unable to recover it. 00:34:20.189 [2024-05-13 20:47:36.039945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.040181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.040191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.040541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.040839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.040849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.041229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.041569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.041579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.041822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.042036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.042046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 
00:34:20.190 [2024-05-13 20:47:36.042460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.042790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.042800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.043038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.043270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.043280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.043534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.043829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.043839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.044189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.044504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.044515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.044898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.045236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.045245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.045625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.046012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.046022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.046390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.046708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.046718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 
00:34:20.190 [2024-05-13 20:47:36.046936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.047273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.047282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.047486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.047825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.047835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.048096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.048443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.048453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.048708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.048917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.048927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.049287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.049601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.049611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.049975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.050289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.050298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.050445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.050783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.050793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 
00:34:20.190 [2024-05-13 20:47:36.051165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.051474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.051484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.051805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.052139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.052148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.052528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.052758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.052767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.053108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.053456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.053465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.053805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.054118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.054127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.054427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.054779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.054789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.055158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.055428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.055437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 
00:34:20.190 [2024-05-13 20:47:36.055773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.055977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.055986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.056210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.056402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.056411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.056823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.057188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.057196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.057586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.057901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.190 [2024-05-13 20:47:36.057912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.190 qpair failed and we were unable to recover it. 00:34:20.190 [2024-05-13 20:47:36.058137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.058445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.058455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.058670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.059016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.059025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.059390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.059642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.059652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 
00:34:20.191 [2024-05-13 20:47:36.060016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.060342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.060352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.060649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.061000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.061010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.061381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.061628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.061637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.061992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.062324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.062334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.062759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.063113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.063123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.063463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.063691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.063701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.064024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.064367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.064377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 
00:34:20.191 [2024-05-13 20:47:36.064730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.065111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.065121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.065460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.065823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.065832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.066205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.066523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.066532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.066750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.067116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.067125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.067515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.067860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.067869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.068068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.068421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.068431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.068635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.068958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.068967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 
00:34:20.191 [2024-05-13 20:47:36.069324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.069560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.069569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.069900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.070075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.070083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.070402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.070611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.070620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.070883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.071098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.071108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.071492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.071893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.071901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.072303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.072610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.072620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.073010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.073333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.073342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 
00:34:20.191 [2024-05-13 20:47:36.073599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.073957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.073965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.074296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.074672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.074682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.075035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.075285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.075294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.075385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.075750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.075759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.076087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.076433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.076442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.191 [2024-05-13 20:47:36.076807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.077027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.191 [2024-05-13 20:47:36.077036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.191 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.077257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.077506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.077515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 
00:34:20.192 [2024-05-13 20:47:36.077901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.078249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.078258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.078363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.078559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.078568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.078795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.079160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.079170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.079531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.079893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.079902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.080115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.080460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.080469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.080794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.080998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.081007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.081225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.081472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.081481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 
00:34:20.192 [2024-05-13 20:47:36.081583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.081791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.081800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.082142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.082359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.082371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.082658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.083001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.083010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.083399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.083717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.083725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.084123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.084281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.084290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.084649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.085022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.085031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.085299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.085655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.085665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 
00:34:20.192 [2024-05-13 20:47:36.086001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.086367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.086376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.086740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.087116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.087125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.087477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.087824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.087833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.088244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.088305] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:34:20.192 [2024-05-13 20:47:36.088357] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.192 [2024-05-13 20:47:36.088566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.088576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.088887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.089273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.089283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.089556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.089888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.089898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 
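Interleaved with the connect retries above, the nvmf target process begins starting up ("Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization" with the listed DPDK EAL parameters). The "-c 0xF0" argument is a plain bitmask of logical cores, so 0xF0 selects cores 4-7 for the target; the remaining flags on that line (--base-virtaddr, --file-prefix=spdk0, --proc-type=auto, log levels) are passed straight through to DPDK's EAL. A tiny sketch (not DPDK code) that just decodes such a coremask:

```c
/* Tiny sketch: decode a DPDK-style "-c" coremask.  Bit i set means logical
 * core i is used, so the 0xF0 from the log selects cores 4, 5, 6 and 7. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0;                /* coremask from the EAL parameter line */

    printf("coremask 0x%lx ->", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1UL << core)) {
            printf(" %d", core);
        }
    }
    printf("\n");                             /* prints: coremask 0xf0 -> 4 5 6 7 */
    return 0;
}
```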
00:34:20.192 [2024-05-13 20:47:36.090126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.090443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.090454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.090834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.091177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.091187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.091534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.091906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.192 [2024-05-13 20:47:36.091916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.192 qpair failed and we were unable to recover it. 00:34:20.192 [2024-05-13 20:47:36.092277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.092626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.092636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.093012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.093357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.093367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.093722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.094066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.094075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.094297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.094649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.094658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 
00:34:20.193 [2024-05-13 20:47:36.095016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.095386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.095396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.095738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.095967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.095977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.096344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.096658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.096668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.097049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.097346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.097356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.097705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.098002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.098011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.098400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.098615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.098625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.098811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.099195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.099205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 
00:34:20.193 [2024-05-13 20:47:36.099547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.099916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.099926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.100268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.100433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.100443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.100779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.100963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.100973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.101174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.101512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.101521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.101862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.102097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.102106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.102417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.102683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.102692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.102876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.103242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.103251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 
00:34:20.193 [2024-05-13 20:47:36.103595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.103771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.103780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.104100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.104296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.104305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.104531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.104869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.104879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.105270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.105611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.105621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.105970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.106143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.106153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.106355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.106693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.106704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.106942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.107301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.107310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 
00:34:20.193 [2024-05-13 20:47:36.107697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.108041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.108051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.108349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.108698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.108708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.108915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.109135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.109143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.109498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.109785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.109794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.193 qpair failed and we were unable to recover it. 00:34:20.193 [2024-05-13 20:47:36.110161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.110388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.193 [2024-05-13 20:47:36.110397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.194 qpair failed and we were unable to recover it. 00:34:20.194 [2024-05-13 20:47:36.110737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.194 [2024-05-13 20:47:36.111113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.194 [2024-05-13 20:47:36.111122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.194 qpair failed and we were unable to recover it. 00:34:20.194 [2024-05-13 20:47:36.111532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.194 [2024-05-13 20:47:36.111837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.194 [2024-05-13 20:47:36.111846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.194 qpair failed and we were unable to recover it. 
00:34:20.194 [2024-05-13 20:47:36.112177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.194 [2024-05-13 20:47:36.112520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.194 [2024-05-13 20:47:36.112529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.194 qpair failed and we were unable to recover it. 00:34:20.194 [2024-05-13 20:47:36.112673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.194 [2024-05-13 20:47:36.112846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.194 [2024-05-13 20:47:36.112855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.194 qpair failed and we were unable to recover it. 00:34:20.194 [2024-05-13 20:47:36.113050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.194 [2024-05-13 20:47:36.113421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.194 [2024-05-13 20:47:36.113431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.194 qpair failed and we were unable to recover it. 00:34:20.194 [2024-05-13 20:47:36.113761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.114094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.114105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 00:34:20.464 [2024-05-13 20:47:36.114205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.114413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.114423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 00:34:20.464 [2024-05-13 20:47:36.114824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.115165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.115175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 00:34:20.464 [2024-05-13 20:47:36.115568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.115927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.115936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 
00:34:20.464 [2024-05-13 20:47:36.116273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.116641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.116651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 00:34:20.464 [2024-05-13 20:47:36.117031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.117211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.117220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 00:34:20.464 [2024-05-13 20:47:36.117623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.117866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.117874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 00:34:20.464 [2024-05-13 20:47:36.118217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.118588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.118597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 00:34:20.464 [2024-05-13 20:47:36.118937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.119194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.119204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 00:34:20.464 [2024-05-13 20:47:36.119398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.119842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.119852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 00:34:20.464 [2024-05-13 20:47:36.120174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.120392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.120401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 
00:34:20.464 [2024-05-13 20:47:36.120611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.120908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.120916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 00:34:20.464 [2024-05-13 20:47:36.121248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.121548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.121558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 00:34:20.464 [2024-05-13 20:47:36.121876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.122213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.122222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 00:34:20.464 [2024-05-13 20:47:36.122567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.122905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.464 [2024-05-13 20:47:36.122913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.464 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.123234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 EAL: No free 2048 kB hugepages reported on node 1 00:34:20.465 [2024-05-13 20:47:36.123589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.123598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.123902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.124247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.124256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.124447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.124801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.124810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 
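The line "EAL: No free 2048 kB hugepages reported on node 1" buried in the block above comes from DPDK's memory setup: NUMA node 1 had no free 2 MB hugepages when the target initialized. That is only a warning as long as enough hugepages are free elsewhere (on node 0, or in a different page size). A small sketch, assuming the standard Linux sysfs layout, that prints the per-node counters EAL is summarizing:

```c
/* Sketch assuming the standard Linux sysfs layout: print free/total 2048 kB
 * hugepages for NUMA nodes 0 and 1, the counters behind the EAL message. */
#include <stdio.h>

/* Read a single unsigned counter from a sysfs file; return 0 if unreadable. */
static unsigned long read_counter(const char *path)
{
    unsigned long value = 0;
    FILE *f = fopen(path, "r");
    if (f != NULL) {
        if (fscanf(f, "%lu", &value) != 1) {
            value = 0;
        }
        fclose(f);
    }
    return value;
}

int main(void)
{
    /* Check nodes 0 and 1, matching the two-node machine in the log. */
    for (int node = 0; node < 2; node++) {
        char path[160];

        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/hugepages/hugepages-2048kB/free_hugepages",
                 node);
        unsigned long free_pages = read_counter(path);

        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/hugepages/hugepages-2048kB/nr_hugepages",
                 node);
        unsigned long total_pages = read_counter(path);

        printf("node %d: %lu free / %lu total 2048 kB hugepages\n",
               node, free_pages, total_pages);
    }
    return 0;
}
```

If both nodes report zero, hugepages have to be reserved (for example by raising vm.nr_hugepages) before the target can allocate its memory pools.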
00:34:20.465 [2024-05-13 20:47:36.125017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.125245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.125254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.125628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.125966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.125975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.126375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.126648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.126658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.127071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.127411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.127421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.127784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.128119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.128128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.128494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.128814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.128824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.129038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.129368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.129378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 
00:34:20.465 [2024-05-13 20:47:36.129735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.129792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.129801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.130171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.130578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.130588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.130932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.131317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.131326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.131689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.132010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.132018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.132351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.132694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.132703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.132955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.133298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.133307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.133655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.133841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.133849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 
00:34:20.465 [2024-05-13 20:47:36.134196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.134489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.134499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.134865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.135198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.135207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.135449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.135812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.135822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.136215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.136585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.136594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.136904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.137158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.137167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.137355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.137748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.137757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.137971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.138172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.138180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 
00:34:20.465 [2024-05-13 20:47:36.138396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.138780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.138789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.139172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.139457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.139466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.139836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.140182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.140191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.140544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.140878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.140887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.141146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.141559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.141568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.465 qpair failed and we were unable to recover it. 00:34:20.465 [2024-05-13 20:47:36.141903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.465 [2024-05-13 20:47:36.142283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.142292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.142650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.143019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.143027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 
00:34:20.466 [2024-05-13 20:47:36.143364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.143718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.143726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.144050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.144344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.144354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.144676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.145008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.145016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.145324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.145683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.145692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.146022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.146332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.146341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.146651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.146986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.146995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.147324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.147645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.147653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 
00:34:20.466 [2024-05-13 20:47:36.147983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.148327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.148336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.148690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.149037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.149046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.149416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.149762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.149770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.149977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.150248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.150257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.150512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.150848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.150857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.151186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.151371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.151381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.151539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.151876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.151885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 
00:34:20.466 [2024-05-13 20:47:36.152125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.152294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.152303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.152646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.152986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.152995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.153343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.153696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.153705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.154043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.154407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.154417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.154754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.155096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.155104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.155449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.155786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.155795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.155949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.156334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.156344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 
00:34:20.466 [2024-05-13 20:47:36.156562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.156935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.156944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.157256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.157481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.157490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.157811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.158124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.158132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.158476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.158785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.158794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.159114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.159451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.159460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.159663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.160007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.160015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.466 qpair failed and we were unable to recover it. 00:34:20.466 [2024-05-13 20:47:36.160203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.466 [2024-05-13 20:47:36.160507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.160516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 
00:34:20.467 [2024-05-13 20:47:36.160850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.161110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.161119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.161312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.161661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.161671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.161991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.162322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.162331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.162639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.162966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.162974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.163306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.163502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.163511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.163870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.164208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.164217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.164497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.164877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.164886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 
00:34:20.467 [2024-05-13 20:47:36.165074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.165417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.165426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.165767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.166105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.166115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.166335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.166714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.166724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.167059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.167422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.167432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.167784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.168046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.168055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.168423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.168790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.168799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.169119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.169457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.169466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 
00:34:20.467 [2024-05-13 20:47:36.169810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.170101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.170110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.170478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.170789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.170799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.171157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.171366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.171375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.171708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.172056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.172064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.172292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.172467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.172477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.172834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.173174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.173182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.173599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.173913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.173928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 
00:34:20.467 [2024-05-13 20:47:36.174093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.174397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.174406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.174758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.175092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.175101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.175418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.175751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.175760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.175988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.176294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.176303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.176696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.177039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.177048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.177461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.177804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.177816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.467 [2024-05-13 20:47:36.178038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.178393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.178402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 
00:34:20.467 [2024-05-13 20:47:36.178728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.178937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.467 [2024-05-13 20:47:36.178945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.467 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.179271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.179476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:20.468 [2024-05-13 20:47:36.179571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.179580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.179926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.180261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.180270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.180605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.180972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.180981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.181170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.181489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.181499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.181944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.182273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.182282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.182651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.182995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.183004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 
00:34:20.468 [2024-05-13 20:47:36.183334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.183696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.183706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.184081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.184429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.184439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.184648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.184838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.184848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.185204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.185522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.185533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.185845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.186140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.186150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.186500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.186841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.186852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.187053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.187244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.187253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 
00:34:20.468 [2024-05-13 20:47:36.187517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.187887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.187896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.188329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.188719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.188728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.188934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.189296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.189305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.189657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.190031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.190040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.190266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.190478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.190488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.190821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.191169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.191178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.191541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.191719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.191729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 
00:34:20.468 [2024-05-13 20:47:36.192039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.192427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.192437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.192620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.192985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.192994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.193324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.193656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.193665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.194047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.194394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.468 [2024-05-13 20:47:36.194403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.468 qpair failed and we were unable to recover it. 00:34:20.468 [2024-05-13 20:47:36.194766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.195153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.195162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.195363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.195548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.195558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.195925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.196125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.196134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 
00:34:20.469 [2024-05-13 20:47:36.196506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.196872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.196881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.197219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.197583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.197593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.197932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.198303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.198312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.198503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.198851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.198860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.199193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.199422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.199433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.199793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.200063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.200072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.200403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.200631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.200639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 
00:34:20.469 [2024-05-13 20:47:36.201009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.201167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.201177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.201549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.201895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.201904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.202207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.202437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.202446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.202804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.203145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.203157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.203494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.203802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.203811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.204139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.204473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.204483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.204836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.205182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.205191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 
00:34:20.469 [2024-05-13 20:47:36.205476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.205800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.205809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.206137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.206477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.206488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.206821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.207045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.207055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.207396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.207743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.207751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.208094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.208405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.208426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.208790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.209098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.209107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.209509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.209891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.209905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 
00:34:20.469 [2024-05-13 20:47:36.210096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.210415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.210425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.210640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.211025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.211033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.211373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.211736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.211744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.212078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.212269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.212278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.469 qpair failed and we were unable to recover it. 00:34:20.469 [2024-05-13 20:47:36.212589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.469 [2024-05-13 20:47:36.212914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.212924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.213262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.213595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.213606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.213930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.214290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.214299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 
00:34:20.470 [2024-05-13 20:47:36.214647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.214990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.215001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.215345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.215571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.215581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.215906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.216143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.216155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.216505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.216842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.216851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.217216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.217563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.217574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.217914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.218249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.218258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.218571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.218932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.218940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 
00:34:20.470 [2024-05-13 20:47:36.219199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.219387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.219397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.219605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.219903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.219912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.220225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.220571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.220580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.220927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.221232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.221241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.221597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.221929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.221938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.222229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.222571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.222580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.222908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.223196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.223205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 
00:34:20.470 [2024-05-13 20:47:36.223547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.223882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.223892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.224273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.224630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.224640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.225004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.225200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.225210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.225554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.225772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.225781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.226026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.226383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.226393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.226727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.227116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.227126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.227477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.227758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.227768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 
00:34:20.470 [2024-05-13 20:47:36.228093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.228424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.228433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.228732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.229089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.229098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.229419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.229759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.229768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.229969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.230337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.230346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.230694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.231023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.231032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.231399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.231734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.470 [2024-05-13 20:47:36.231743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.470 qpair failed and we were unable to recover it. 00:34:20.470 [2024-05-13 20:47:36.232051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.232331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.232340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 
00:34:20.471 [2024-05-13 20:47:36.232746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.233112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.233120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.233457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.233801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.233810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.234138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.234349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.234357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.234766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.235107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.235116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.235328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.235574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.235584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.235908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.236264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.236273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.236478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.236814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.236824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 
00:34:20.471 [2024-05-13 20:47:36.237217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.237555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.237564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.237727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.238072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.238081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.238410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.238764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.238773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.239137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.239488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.239498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.239848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.240188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.240197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.240605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.240944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.240952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.241297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.241657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.241667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 
00:34:20.471 [2024-05-13 20:47:36.241856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.242181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.242190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.242557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.242848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.242856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.243220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.243576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.243585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.243918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.244283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.244291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.244541] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:20.471 [2024-05-13 20:47:36.244569] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:20.471 [2024-05-13 20:47:36.244576] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:20.471 [2024-05-13 20:47:36.244583] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:20.471 [2024-05-13 20:47:36.244589] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:20.471 [2024-05-13 20:47:36.244705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.244748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:20.471 [2024-05-13 20:47:36.244930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.244940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 
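The app_setup_trace NOTICE lines above describe how the tracepoint data for this run can be captured. A minimal shell sketch of that capture, using only the command and shared-memory path printed by the application (the app name "nvmf", instance id 0, and the /dev/shm/nvmf_trace.0 file name are taken from this run's output and may differ in other runs; the snapshot output file name here is illustrative):

    # take a live snapshot of tracepoint events while the target is still running
    spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
    # or preserve the raw shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0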
00:34:20.471 [2024-05-13 20:47:36.244890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:20.471 [2024-05-13 20:47:36.245015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:20.471 [2024-05-13 20:47:36.245015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:20.471 [2024-05-13 20:47:36.245350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.245519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.245528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.245880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.246130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.246139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.246475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.246816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.246824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.247020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.247399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.247411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.247749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.247958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.247967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.248270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.248490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.248501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 00:34:20.471 [2024-05-13 20:47:36.248861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.249253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.249263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.471 qpair failed and we were unable to recover it. 
00:34:20.471 [2024-05-13 20:47:36.249671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.471 [2024-05-13 20:47:36.250049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.250059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.250410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.250651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.250661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.250988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.251325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.251334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.251677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.251919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.251928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.252279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.252655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.252664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.253038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.253264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.253273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.253648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.253994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.254005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 
00:34:20.472 [2024-05-13 20:47:36.254354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.254718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.254727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.255097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.255506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.255516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.255902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.256254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.256263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.256550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.256833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.256841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.257207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.257557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.257567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.257904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.258024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.258033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.258388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.258747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.258756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 
00:34:20.472 [2024-05-13 20:47:36.259110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.259439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.259448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.259790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.260150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.260159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.260491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.260733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.260744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.261089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.261435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.261445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.261782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.262148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.262158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.262408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.262756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.262765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.263094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.263463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.263473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 
00:34:20.472 [2024-05-13 20:47:36.263698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.264071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.264081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.264435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.264799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.264809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.265140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.265238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.265247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.265582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.265812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.265821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.266158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.266495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.266506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.266875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.267251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.267260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.267352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.267684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.267694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 
00:34:20.472 [2024-05-13 20:47:36.268036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.268398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.268407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.268770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.269090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.472 [2024-05-13 20:47:36.269099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.472 qpair failed and we were unable to recover it. 00:34:20.472 [2024-05-13 20:47:36.269431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.269815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.269824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.270216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.270467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.270477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.270696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.271069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.271078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.271273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.271483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.271492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.271876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.272257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.272265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 
00:34:20.473 [2024-05-13 20:47:36.272683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.273033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.273042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.273442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.273824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.273834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.274191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.274523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.274532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.274739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.275100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.275110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.275467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.275852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.275860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.276138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.276522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.276531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.276755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.277103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.277112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 
00:34:20.473 [2024-05-13 20:47:36.277445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.277798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.277808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.278143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.278225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.278234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.278592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.278778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.278788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.279032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.279406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.279416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.279751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.280110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.280119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.280291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.280656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.280666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.280997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.281205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.281214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 
00:34:20.473 [2024-05-13 20:47:36.281628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.281937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.281947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.282242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.282605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.282614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.282866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.283094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.283103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.283168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.283516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.283527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.283896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.284251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.284260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.473 [2024-05-13 20:47:36.284606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.284836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.473 [2024-05-13 20:47:36.284845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.473 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.285202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.285522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.285532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 
00:34:20.474 [2024-05-13 20:47:36.285892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.286199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.286208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.286608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.286946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.286955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.287302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.287660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.287670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.287985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.288323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.288333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.288674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.288857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.288865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.289081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.289484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.289493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.289698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.289899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.289908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 
00:34:20.474 [2024-05-13 20:47:36.290207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.290448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.290458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.290702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.290924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.290933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.291138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.291468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.291477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.291897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.292079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.292088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.292449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.292767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.292777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.293116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.293474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.293484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.293682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.294053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.294061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 
00:34:20.474 [2024-05-13 20:47:36.294402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.294763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.294771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.294978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.295286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.295295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.295655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.296041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.296049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.296288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.296667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.296677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.296891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.297272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.297281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.297639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.297846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.297855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.298112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.298549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.298558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 
00:34:20.474 [2024-05-13 20:47:36.298893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.299169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.299178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.299465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.299707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.299716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.300081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.300340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.300350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.300574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.300933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.300941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.301283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.301526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.301536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.301870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.302225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.302234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 00:34:20.474 [2024-05-13 20:47:36.302576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.302912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.474 [2024-05-13 20:47:36.302921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.474 qpair failed and we were unable to recover it. 
00:34:20.474 [2024-05-13 20:47:36.303213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.303446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.303456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.303833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.304059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.304068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.304426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.304808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.304817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.305081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.305299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.305308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.305684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.306071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.306081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.306349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.306635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.306644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.307009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.307378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.307388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 
00:34:20.475 [2024-05-13 20:47:36.307763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.308148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.308156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.308498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.308727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.308736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.309133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.309475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.309485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.309822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.310193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.310201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.310613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.310958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.310967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.311316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.311657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.311667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.312020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.312364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.312386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 
00:34:20.475 [2024-05-13 20:47:36.312829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.313168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.313178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.313512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.313744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.313753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.314096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.314304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.314318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.314503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.314825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.314833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.315239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.315583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.315594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.315832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.316174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.316183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.316602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.316933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.316941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 
00:34:20.475 [2024-05-13 20:47:36.317203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.317411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.317420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.317596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.317970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.317980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.318351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.318714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.318724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.319109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.319469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.319478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.319533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.319928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.319937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.320294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.320634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.320643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.320976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.321354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.321363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 
00:34:20.475 [2024-05-13 20:47:36.321724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.321928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.321937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.475 qpair failed and we were unable to recover it. 00:34:20.475 [2024-05-13 20:47:36.322195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.475 [2024-05-13 20:47:36.322384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.322394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.322611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.322966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.322976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.323364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.323554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.323563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.323931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.324322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.324331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.324513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.324745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.324755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.325108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.325323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.325332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 
00:34:20.476 [2024-05-13 20:47:36.325699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.326041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.326050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.326415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.326771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.326780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.326988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.327316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.327325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.327676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.327905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.327914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.328165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.328520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.328530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.328911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.329246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.329255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.329502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.329773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.329781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 
00:34:20.476 [2024-05-13 20:47:36.330114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.330347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.330357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.330739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.331078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.331087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.331472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.331793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.331802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.331871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.332106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.332114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.332357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.332694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.332703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.333035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.333236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.333245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.333446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.333760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.333769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 
00:34:20.476 [2024-05-13 20:47:36.334101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.334291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.334301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.334653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.334878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.334887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.335236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.335581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.335590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.335658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.336003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.336012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.336368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.336575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.336584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.336642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.337041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.337050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.337108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.337459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.337469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 
00:34:20.476 [2024-05-13 20:47:36.337822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.338185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.338194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.338441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.338821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.338830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.476 qpair failed and we were unable to recover it. 00:34:20.476 [2024-05-13 20:47:36.339158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.476 [2024-05-13 20:47:36.339516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.339525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.339708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.339933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.339943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.340281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.340621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.340630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.340818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.341130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.341138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.341482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.341833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.341842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 
00:34:20.477 [2024-05-13 20:47:36.342052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.342436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.342447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.342887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.343240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.343249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.343640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.343973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.343982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.344359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.344708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.344717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.345091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.345435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.345444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.345787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.346148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.346157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.346544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.346904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.346914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 
00:34:20.477 [2024-05-13 20:47:36.347136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.347470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.347479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.347719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.348072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.348080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.348380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.348785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.348794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.349125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.349477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.349489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.349831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.350069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.350077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.350464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.350763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.350773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.350963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.351271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.351281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 
00:34:20.477 [2024-05-13 20:47:36.351473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.351827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.351837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.352178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.352546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.352555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.352753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.353099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.353107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.353399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.353737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.353746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.354113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.354478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.354487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.354843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.355219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.477 [2024-05-13 20:47:36.355228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.477 qpair failed and we were unable to recover it. 00:34:20.477 [2024-05-13 20:47:36.355590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.355961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.355973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 
00:34:20.478 [2024-05-13 20:47:36.356205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.356429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.356439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.356816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.357023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.357032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.357389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.357615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.357624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.357843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.358198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.358207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.358565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.358902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.358912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.359267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.359636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.359646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.359976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.360179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.360190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 
00:34:20.478 [2024-05-13 20:47:36.360443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.360501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.360511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.360834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.360976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.360986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.361223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.361562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.361574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.361760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.361999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.362007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.362409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.362793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.362803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.363011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.363401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.363410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.363657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.364039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.364048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 
00:34:20.478 [2024-05-13 20:47:36.364392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.364764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.364773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.364978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.365208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.365223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.365472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.365947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.365956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.366151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.366524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.366534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.366867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.367081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.367089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.367271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.367460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.367469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.367784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.368160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.368169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 
00:34:20.478 [2024-05-13 20:47:36.368360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.368676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.368685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.369057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.369438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.369448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.369790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.370113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.370130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.370462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.370822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.370831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.371145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.371379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.371389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.371663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.371985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.371994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 00:34:20.478 [2024-05-13 20:47:36.372330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.372521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.478 [2024-05-13 20:47:36.372530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.478 qpair failed and we were unable to recover it. 
00:34:20.479 [2024-05-13 20:47:36.372929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.373274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.373283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.373636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.374018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.374028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.374263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.374649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.374658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.374994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.375398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.375407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.375654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.376044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.376054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.376412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.376724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.376733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.377098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.377466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.377475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 
00:34:20.479 [2024-05-13 20:47:36.377846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.378048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.378057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.378369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.378728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.378737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.379110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.379194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.379202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.379363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.379668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.379677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.380067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.380452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.380462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.380795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.381172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.381181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.381512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.381857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.381866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 
00:34:20.479 [2024-05-13 20:47:36.382050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.382401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.382410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.382467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.382830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.382840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.383211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.383577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.383586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.383936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.384307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.384322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.384529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.384766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.384777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.385130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.385472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.385482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.385833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.386184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.386192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 
00:34:20.479 [2024-05-13 20:47:36.386366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.386597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.386606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.386982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.387184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.387194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.387382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.387724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.387733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.388084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.388301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.388310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.388644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.389007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.389016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.479 qpair failed and we were unable to recover it. 00:34:20.479 [2024-05-13 20:47:36.389361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.389568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.479 [2024-05-13 20:47:36.389577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 00:34:20.480 [2024-05-13 20:47:36.389915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.390301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.390310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 
00:34:20.480 [2024-05-13 20:47:36.390738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.391053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.391062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 00:34:20.480 [2024-05-13 20:47:36.391421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.391808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.391818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 00:34:20.480 [2024-05-13 20:47:36.392151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.392517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.392527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 00:34:20.480 [2024-05-13 20:47:36.392871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.393133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.393143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 00:34:20.480 [2024-05-13 20:47:36.393518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.393721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.393730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 00:34:20.480 [2024-05-13 20:47:36.393925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.394144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.394154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 00:34:20.480 [2024-05-13 20:47:36.394529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.394901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.394910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 
00:34:20.480 [2024-05-13 20:47:36.395238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.395619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.395628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 00:34:20.480 [2024-05-13 20:47:36.395982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.396077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.396088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 00:34:20.480 [2024-05-13 20:47:36.396516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.396859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.396868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 00:34:20.480 [2024-05-13 20:47:36.397201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.397540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.397550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 00:34:20.480 [2024-05-13 20:47:36.397857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.398236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.398244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 00:34:20.480 [2024-05-13 20:47:36.398508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.398762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.480 [2024-05-13 20:47:36.398772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.480 qpair failed and we were unable to recover it. 00:34:20.750 [2024-05-13 20:47:36.399124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.399339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.399350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 
00:34:20.750 [2024-05-13 20:47:36.399720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.400019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.400029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-05-13 20:47:36.400417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.400762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.400771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-05-13 20:47:36.401149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.402083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.402106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-05-13 20:47:36.402333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.402697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.402706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-05-13 20:47:36.403071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.403405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.403416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-05-13 20:47:36.403832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.404166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.404175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-05-13 20:47:36.404514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.404721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.404730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 
00:34:20.750 [2024-05-13 20:47:36.405092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.405295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.405304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-05-13 20:47:36.405664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.405893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.405902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-05-13 20:47:36.406231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.406458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.406467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-05-13 20:47:36.406828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.407206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.407214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-05-13 20:47:36.407566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.407768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.407777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.750 [2024-05-13 20:47:36.408116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.408465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.750 [2024-05-13 20:47:36.408474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.750 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.408822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.409199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.409207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 
00:34:20.751 [2024-05-13 20:47:36.409450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.409820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.409829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.410158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.410476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.410486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.410823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.411159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.411168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.411542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.411877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.411886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.411941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.412278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.412287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.412544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.412715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.412723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.412936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.413262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.413270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 
00:34:20.751 [2024-05-13 20:47:36.413647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.414013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.414021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.414376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.414735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.414744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.414934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.414993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.415002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.415212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.415523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.415532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.415860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.416228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.416237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.416606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.416992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.417001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.417245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.417395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.417405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 
00:34:20.751 [2024-05-13 20:47:36.417604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.417826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.417835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.418210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.418563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.418572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.418900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.419275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.419284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.419521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.419874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.419883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.751 qpair failed and we were unable to recover it. 00:34:20.751 [2024-05-13 20:47:36.420213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.420553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.751 [2024-05-13 20:47:36.420563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.420779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.421142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.421150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.421334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.421529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.421538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 
00:34:20.752 [2024-05-13 20:47:36.421744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.422086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.422095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.422360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.422595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.422603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.422809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.423144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.423153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.423573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.423778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.423786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.424151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.424495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.424504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.424702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.425058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.425067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.425477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.425774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.425782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 
00:34:20.752 [2024-05-13 20:47:36.426165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.426469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.426479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.426852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.427228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.427236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.427657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.427976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.427985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.428354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.428700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.428709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.428897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.429272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.429281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.429511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.429728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.429737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.430083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.430311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.430332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 
00:34:20.752 [2024-05-13 20:47:36.430678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.431024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.431032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.431359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.431700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.431709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.431893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.432119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.432129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.432334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.432509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.432518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.752 qpair failed and we were unable to recover it. 00:34:20.752 [2024-05-13 20:47:36.432920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.752 [2024-05-13 20:47:36.433278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.433287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.433344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.433692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.433702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.434079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.434417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.434427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 
00:34:20.753 [2024-05-13 20:47:36.434633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.434940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.434948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.435327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.435668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.435677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.435882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.436223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.436232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.436577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.436977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.436986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.437333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.437604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.437615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.437995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.438365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.438374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.438759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.439094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.439103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 
00:34:20.753 [2024-05-13 20:47:36.439360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.439718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.439726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.440091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.440418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.440428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.440777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.441139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.441147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.441500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.441726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.441736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.442094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.442444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.442454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.442635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.442914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.442923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.443279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.443652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.443661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 
00:34:20.753 [2024-05-13 20:47:36.444036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.444377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.444388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.444830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.445191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.445200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.445392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.445716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.445725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.446023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.446392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.446401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.446785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.447168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.447177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.447386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.447632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.447640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.447839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.448215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.448224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 
00:34:20.753 [2024-05-13 20:47:36.448418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.448662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.448671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.449019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.449376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.449386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.753 qpair failed and we were unable to recover it. 00:34:20.753 [2024-05-13 20:47:36.449755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.753 [2024-05-13 20:47:36.450137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.450146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.450520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.450892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.450902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.451250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.451627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.451636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.451964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.452337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.452347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.452712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.453095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.453104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 
00:34:20.754 [2024-05-13 20:47:36.453322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.453504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.453515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.453848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.454231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.454240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.454524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.454865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.454874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.455205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.455677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.455687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.456027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.456386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.456396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.456764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.457106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.457115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.457460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.457816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.457827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 
00:34:20.754 [2024-05-13 20:47:36.458196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.458400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.458409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.458790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.458940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.458949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.459371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.459693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.459702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.460076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.460411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.460421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.460772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.461119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.461127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.461471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.461735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.461744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.462135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.462482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.462492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 
00:34:20.754 [2024-05-13 20:47:36.462872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.463226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.463234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.463576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.463626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.463635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.463987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.464177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.464186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.464543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.464889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.464897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.465325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.465530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.465540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.465943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.466368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.466377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.466739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.467092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.467101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 
00:34:20.754 [2024-05-13 20:47:36.467289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.467632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.467641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.468017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.468404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.468413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.468676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.468889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.468898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.754 [2024-05-13 20:47:36.469240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.469429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.754 [2024-05-13 20:47:36.469438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.754 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.469730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.470049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.470057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.470388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.470768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.470777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.471106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.471301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.471311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 
00:34:20.755 [2024-05-13 20:47:36.471659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.471724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.471732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.471927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.472277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.472285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.472648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.473031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.473039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.473371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.473639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.473648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.473833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.474139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.474148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.474404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.474722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.474731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.475060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.475441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.475450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 
00:34:20.755 [2024-05-13 20:47:36.475623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.475881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.475889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.476251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.476610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.476620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.476952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.477288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.477297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.477518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.477863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.477873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.478223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.478564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.478573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.478904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.479278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.479287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.479520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.479844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.479853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 
00:34:20.755 [2024-05-13 20:47:36.480043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.480248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.480258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.480624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.480961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.480969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.481111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.481439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.481449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.481803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.482141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.482150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.755 [2024-05-13 20:47:36.482528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.482904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.755 [2024-05-13 20:47:36.482912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.755 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.483144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.483519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.483528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.483916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.484086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.484095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 
00:34:20.756 [2024-05-13 20:47:36.484307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.484665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.484674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.484834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.485086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.485095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.485327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.485658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.485667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.486000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.486356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.486366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.486577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.486840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.486848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.487202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.487607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.487616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.487941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.488122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.488131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 
00:34:20.756 [2024-05-13 20:47:36.488450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.488798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.488807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.489202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.489527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.489537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.489987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.490340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.490350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.490720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.491062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.491072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.491274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.491493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.491503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.491734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.492107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.492117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.492499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.492705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.492713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 
00:34:20.756 [2024-05-13 20:47:36.493117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.493454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.493463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.493784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.494167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.494175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.494391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.494755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.494764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.495072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.495446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.495456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.495827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.496027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.496035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.496451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.496657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.496666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.496852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.497054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.497063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 
00:34:20.756 [2024-05-13 20:47:36.497269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.497587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.497597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.756 qpair failed and we were unable to recover it. 00:34:20.756 [2024-05-13 20:47:36.497949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.756 [2024-05-13 20:47:36.498320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.498329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.498773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.498979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.498988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.499349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.499704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.499713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.500100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.500261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.500269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.500621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.500830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.500840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.501237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.501587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.501597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 
00:34:20.757 [2024-05-13 20:47:36.501928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.502079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.502087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.502414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.502641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.502650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.502857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.503073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.503082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.503458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.503544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.503554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.503903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.504240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.504249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.504452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.504646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.504656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.504866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.505204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.505214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 
00:34:20.757 [2024-05-13 20:47:36.505429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.505817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.505826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.506028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.506244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.506253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.506617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.506860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.506868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.507220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.507393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.507402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.507751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.507961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.507970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.508285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.508661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.508670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.509001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.509369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.509378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 
00:34:20.757 [2024-05-13 20:47:36.509608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.509812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.509820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.510184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.510610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.757 [2024-05-13 20:47:36.510619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.757 qpair failed and we were unable to recover it. 00:34:20.757 [2024-05-13 20:47:36.510949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.511320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.511329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.511508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.511948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.511957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.512335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.512629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.512638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.512966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.513303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.513312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.513644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.514031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.514040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 
00:34:20.758 [2024-05-13 20:47:36.514258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.514462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.514472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.514726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.515107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.515117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.515470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.515660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.515669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.516028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.516239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.516248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.516440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.516664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.516674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.517030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.517233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.517241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.517639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.518014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.518023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 
00:34:20.758 [2024-05-13 20:47:36.518398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.518747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.518756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.519084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.519274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.519282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.519489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.519825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.519833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.520165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.520505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.520515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.520826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.521047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.521056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.521443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.521667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.521675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.522037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.522380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.522390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 
00:34:20.758 [2024-05-13 20:47:36.522603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.522925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.522933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.523307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.523665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.523674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.758 qpair failed and we were unable to recover it. 00:34:20.758 [2024-05-13 20:47:36.524009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.524363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.758 [2024-05-13 20:47:36.524373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.524562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.524840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.524848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.525206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.525435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.525444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.525805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.526035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.526044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.526373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.526720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.526729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 
00:34:20.759 [2024-05-13 20:47:36.527062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.527427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.527437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.527779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.528114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.528123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.528487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.528824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.528832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.529038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.529422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.529431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.529789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.530013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.530021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.530238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.530432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.530442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.530815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.531031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.531040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 
00:34:20.759 [2024-05-13 20:47:36.531367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.531683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.531692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.532048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.532388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.532398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.532704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.533057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.533066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.533269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.533496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.533505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.533844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.534187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.534196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.534401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.534652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.534661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.534844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.535195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.535203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 
00:34:20.759 [2024-05-13 20:47:36.535611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.535960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.535968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.536405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.536616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.536625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.536862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.537056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.537064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.759 qpair failed and we were unable to recover it. 00:34:20.759 [2024-05-13 20:47:36.537300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.537607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.759 [2024-05-13 20:47:36.537616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.537958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.538124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.538135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.538361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.538526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.538534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.538717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.539073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.539082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 
00:34:20.760 [2024-05-13 20:47:36.539260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.539501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.539510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.539891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.540186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.540194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.540524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.540866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.540874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.541202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.541549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.541558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.541905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.542317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.542327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.542675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.542884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.542893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.543122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.543443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.543452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 
00:34:20.760 [2024-05-13 20:47:36.543797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.544020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.544031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.544452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.544796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.544805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.544863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.545246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.545256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.545599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.545935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.545944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.546293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.546521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.546530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.546586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.546877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.546886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.547245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.547451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.547460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 
00:34:20.760 [2024-05-13 20:47:36.547815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.547864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.547872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.548293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.548486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.548496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.548757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.549023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.549032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.549414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.549773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.549784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.760 qpair failed and we were unable to recover it. 00:34:20.760 [2024-05-13 20:47:36.549978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.760 [2024-05-13 20:47:36.550309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.550322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.550506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.550925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.550934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.551274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.551481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.551490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 
00:34:20.761 [2024-05-13 20:47:36.551904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.552114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.552123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.552424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.552785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.552795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.553017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.553353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.553362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.553708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.554069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.554077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.554458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.554839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.554848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.555217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.555618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.555627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.555970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.556184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.556196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 
00:34:20.761 [2024-05-13 20:47:36.556393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.556665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.556674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.556874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.557258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.557267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.557614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.557943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.557952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.558299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.558644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.558653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.558845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.559219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.559228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.559587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.559761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.559769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.560113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.560552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.560562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 
00:34:20.761 [2024-05-13 20:47:36.560899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.561248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.561257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.761 qpair failed and we were unable to recover it. 00:34:20.761 [2024-05-13 20:47:36.561614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.561960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.761 [2024-05-13 20:47:36.561968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.562032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.562369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.562379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.562793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.563183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.563192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.563566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.563908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.563917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.564351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.564726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.564734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.565080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.565305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.565323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 
00:34:20.762 [2024-05-13 20:47:36.565645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.565908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.565917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.566248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.566527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.566536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.566774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.567002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.567011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.567234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.567603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.567612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.567952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.568335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.568344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.568613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.568849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.568857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.569060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.569429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.569438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 
00:34:20.762 [2024-05-13 20:47:36.569773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.570110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.570119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.570318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.570677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.570686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.570891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.571183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.571191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.571527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.571754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.571763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.571993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.572256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.572265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.572653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.573012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.573021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.573380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.573608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.573617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 
00:34:20.762 [2024-05-13 20:47:36.574008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.574358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.574368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.574800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.575143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.575151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.575536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.575890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.575898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.762 qpair failed and we were unable to recover it. 00:34:20.762 [2024-05-13 20:47:36.576233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.762 [2024-05-13 20:47:36.576565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.576574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.576907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.577307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.577327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.577682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.577896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.577904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.578069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.578452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.578461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 
00:34:20.763 [2024-05-13 20:47:36.578845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.579079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.579088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.579411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.579628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.579637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.580011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.580301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.580310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.580652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.580875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.580884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.581327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.581684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.581693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.581908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.582098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.582109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.582439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.582771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.582780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 
00:34:20.763 [2024-05-13 20:47:36.582985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.583391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.583400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.583589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.583967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.583976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.584308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.584670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.584678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.585057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.585260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.585269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.585440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.585761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.585769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.586107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.586332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.586341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.586675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.586880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.586889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 
00:34:20.763 [2024-05-13 20:47:36.587235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.587522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.587532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.587877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.588069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.588078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.588444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.588804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.588812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.589160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.589515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.589524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.589895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.590213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.590229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.590597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.590760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.590768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.591110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.591459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.591469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 
00:34:20.763 [2024-05-13 20:47:36.591831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.592166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.592175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.592371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.592698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.592707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.593082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.593419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.593429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.763 qpair failed and we were unable to recover it. 00:34:20.763 [2024-05-13 20:47:36.593636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.763 [2024-05-13 20:47:36.594002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.594011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.594390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.594621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.594630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.595010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.595224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.595233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.595431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.595659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.595668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 
00:34:20.764 [2024-05-13 20:47:36.595969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.596342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.596351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.596723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.597099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.597107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.597510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.597560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.597568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.597962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.598348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.598358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.598697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.598911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.598920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.599266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.599629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.599638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.599966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.600123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.600132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 
00:34:20.764 [2024-05-13 20:47:36.600322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.600698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.600706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.601034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.601410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.601419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.601751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.601936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.601945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.602295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.602635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.602644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.602958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.603327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.603336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.603683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.604033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.604041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.604371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.604768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.604776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 
00:34:20.764 [2024-05-13 20:47:36.604984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.605405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.605414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.605762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.606103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.606112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.606443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.606863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.606872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.764 qpair failed and we were unable to recover it. 00:34:20.764 [2024-05-13 20:47:36.607217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.607441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.764 [2024-05-13 20:47:36.607451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.765 qpair failed and we were unable to recover it. 00:34:20.765 [2024-05-13 20:47:36.607814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.608161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.608170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.765 qpair failed and we were unable to recover it. 00:34:20.765 [2024-05-13 20:47:36.608499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.608866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.608874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.765 qpair failed and we were unable to recover it. 00:34:20.765 [2024-05-13 20:47:36.609203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.609576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.609585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.765 qpair failed and we were unable to recover it. 
00:34:20.765 [2024-05-13 20:47:36.609797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.610130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.610138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.765 qpair failed and we were unable to recover it. 00:34:20.765 [2024-05-13 20:47:36.610348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.610662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.610671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.765 qpair failed and we were unable to recover it. 00:34:20.765 [2024-05-13 20:47:36.610876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.611217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.611226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.765 qpair failed and we were unable to recover it. 00:34:20.765 [2024-05-13 20:47:36.611561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.611853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.611862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.765 qpair failed and we were unable to recover it. 00:34:20.765 [2024-05-13 20:47:36.612201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.612536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.612546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.765 qpair failed and we were unable to recover it. 00:34:20.765 [2024-05-13 20:47:36.612876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.613209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.613218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.765 qpair failed and we were unable to recover it. 00:34:20.765 [2024-05-13 20:47:36.613623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.613824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.765 [2024-05-13 20:47:36.613833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:20.765 qpair failed and we were unable to recover it. 
00:34:20.765 [2024-05-13 20:47:36.614186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.765 [2024-05-13 20:47:36.614576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.765 [2024-05-13 20:47:36.614585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420
00:34:20.765 qpair failed and we were unable to recover it.
00:34:20.765 [2024-05-13 20:47:36.614918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.765 [2024-05-13 20:47:36.615148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.765 [2024-05-13 20:47:36.615156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420
00:34:20.765 qpair failed and we were unable to recover it.
[... the same three-line sequence -- posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt logged between 20:47:36.615 and 20:47:36.708 ...]
00:34:21.042 [2024-05-13 20:47:36.708274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.042 [2024-05-13 20:47:36.708513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.042 [2024-05-13 20:47:36.708522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420
00:34:21.042 qpair failed and we were unable to recover it.
00:34:21.042 [2024-05-13 20:47:36.708745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-05-13 20:47:36.709057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.709067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.709406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.709606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.709615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.709920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.710123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.710132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.710480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.710862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.710870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.711199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.711399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.711408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.711650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.711878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.711888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.712325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.712666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.712676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 
00:34:21.043 [2024-05-13 20:47:36.712877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.713210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.713219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.713683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.714026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.714034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.714349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.714558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.714568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.714903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.715261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.715270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.715565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.715775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.715784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.716161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.716532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.716541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.716884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.717164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.717173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 
00:34:21.043 [2024-05-13 20:47:36.717484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.717852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.717860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.718204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.718542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.718551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.718885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.719075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.719085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.719149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.719528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.719537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.719867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.720130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.720138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.720503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.720887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.720896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.721254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.721446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.721455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 
00:34:21.043 [2024-05-13 20:47:36.721782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.722111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.722120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.722534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.722874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.722884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.723257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.723469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.723479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.723824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.724210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.724219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.724455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.724833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.724842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.725198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.725566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.725575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-05-13 20:47:36.725903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.725955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.725963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 
00:34:21.043 [2024-05-13 20:47:36.726281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.726646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-05-13 20:47:36.726655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.726949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.727321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.727330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.727682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.728060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.728069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.728281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.728489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.728498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.728858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.729222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.729231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.729435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.729811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.729820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.730101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.730282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.730290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 
00:34:21.044 [2024-05-13 20:47:36.730636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.731014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.731023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.731205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.731371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.731382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.731608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.731869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.731878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.732281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.732438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.732448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.732669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.733019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.733029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.733412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.733764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.733773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.734148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.734490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.734499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 
00:34:21.044 [2024-05-13 20:47:36.734859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.735219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.735228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.735576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.735941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.735950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.736182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.736552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.736561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.736850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.736914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.736922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.737117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.737455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.737468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.737818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.738119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.738128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.738517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.738821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.738830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 
00:34:21.044 [2024-05-13 20:47:36.739212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.739426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.739435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.739759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.740130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.740139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.740489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.740866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.740875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.741208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.741596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.741605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.741937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.742274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.742283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.742433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.742672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.742682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.742886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.743255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.743263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 
00:34:21.044 [2024-05-13 20:47:36.743619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.743833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.743844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.744042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.744417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.744426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.044 qpair failed and we were unable to recover it. 00:34:21.044 [2024-05-13 20:47:36.744774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.044 [2024-05-13 20:47:36.744989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.744998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.745359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.745719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.745727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.746104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.746504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.746514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.746718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.746942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.746953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.747250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.747619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.747629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 
00:34:21.045 [2024-05-13 20:47:36.748007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.748234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.748244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.748657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.748753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.748763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.749057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.749247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.749256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.749601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.749939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.749951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.750327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.750689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.750698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.751029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.751317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.751327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.751641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.752022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.752031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 
00:34:21.045 [2024-05-13 20:47:36.752358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.752706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.752714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.753046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.753375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.753385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.753751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.754096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.754105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.754276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.754687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.754696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.754750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.755080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.755090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.755434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.755776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.755785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.755843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.756176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.756186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 
00:34:21.045 [2024-05-13 20:47:36.756543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.756756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.756765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.757118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.757503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.757512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.757570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.757765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.757775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.757998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.758221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.758230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.758568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.758932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.758941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.759275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.759489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.759498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.759832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.760197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.760206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 
00:34:21.045 [2024-05-13 20:47:36.760572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.760943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.760952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.761330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.761683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.761693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.762048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.762384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.762393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.045 qpair failed and we were unable to recover it. 00:34:21.045 [2024-05-13 20:47:36.762764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.763086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.045 [2024-05-13 20:47:36.763096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.763427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.763807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.763815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.764172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.764528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.764538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.764895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.765306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.765322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 
00:34:21.046 [2024-05-13 20:47:36.765686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.766060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.766069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.766399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.766746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.766754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.767036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.767233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.767243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.767449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.767824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.767833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.768167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.768517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.768526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.768919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.769132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.769141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.769311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.769573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.769582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 
00:34:21.046 [2024-05-13 20:47:36.769915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.770100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.770109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.770319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.770681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.770690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.771018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.771388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.771398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.771758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.772100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.772108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.772305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.772516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.772527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.772795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.773016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.773025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.773399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.773765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.773774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 
00:34:21.046 [2024-05-13 20:47:36.773978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.774370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.774379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.774741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.775080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.775090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.046 qpair failed and we were unable to recover it. 00:34:21.046 [2024-05-13 20:47:36.775557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.046 [2024-05-13 20:47:36.775741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.775749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.776094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.776465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.776475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.776856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.777209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.777218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.777562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.777908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.777917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.778288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.778627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.778636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 
00:34:21.047 [2024-05-13 20:47:36.779055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.779273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.779282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.779666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.780011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.780019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.780384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.780615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.780625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.780959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.781342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.781352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.781563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.781927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.781936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.782311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.782628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.782638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.782897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.783293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.783302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 
00:34:21.047 [2024-05-13 20:47:36.783692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.784024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.784033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.784223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.784467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.784476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.784820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.785030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.785039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.785461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.785821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.785831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.786192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.786307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.786323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.786623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.786834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.786843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.787171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.787272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.787281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 
00:34:21.047 [2024-05-13 20:47:36.787632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.787868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.787877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.788082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.788427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.788437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.788778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.789017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.789026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.789398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.789619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.789628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.789778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.790095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.790105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.790392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.790769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.790778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.791125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.791474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.791483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 
00:34:21.047 [2024-05-13 20:47:36.791864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.791921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.791930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.792280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.792598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.792608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.792952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.793292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.793301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.793504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.793725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.047 [2024-05-13 20:47:36.793735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.047 qpair failed and we were unable to recover it. 00:34:21.047 [2024-05-13 20:47:36.794086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.794468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.794478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.794817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.795198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.795208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.795582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.796003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.796012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 
00:34:21.048 [2024-05-13 20:47:36.796321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.796794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.796803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.797154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.797523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.797532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.797861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.798248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.798257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.798609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.798973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.798983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.799337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.799531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.799540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.799904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.800109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.800118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.800450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.800808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.800817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 
00:34:21.048 [2024-05-13 20:47:36.801158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.801369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.801378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.801658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.802040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.802050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.802422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.802773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.802783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.803136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.803343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.803353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.803646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.803989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.803998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.804374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.804703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.804713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.804936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.805251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.805261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 
00:34:21.048 [2024-05-13 20:47:36.805479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.805863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.805873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.806071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.806304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.806325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.806728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.807069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.807078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.807378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.807768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.807778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.807994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.808230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.808241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.808432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.808818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.808828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.809201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.809563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.809573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 
00:34:21.048 [2024-05-13 20:47:36.809917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.810041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.810049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.810387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.810746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.810755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.811086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.811297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.811306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.811656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.812002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.812011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.812276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.812474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.812485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.048 qpair failed and we were unable to recover it. 00:34:21.048 [2024-05-13 20:47:36.812837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.812902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.048 [2024-05-13 20:47:36.812910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.813242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.813499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.813509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 
00:34:21.049 [2024-05-13 20:47:36.813873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.814085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.814096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.814383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.814735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.814744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.814980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.815346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.815355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.815664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.816042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.816051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.816301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.816660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.816670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.816998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.817375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.817385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.817592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.817915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.817923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 
00:34:21.049 [2024-05-13 20:47:36.818109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.818500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.818509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.818846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.819169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.819178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.819542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.819902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.819911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.820321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.820678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.820688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.821021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.821408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.821418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.821771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.822148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.822158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.822341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.822543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.822553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 
00:34:21.049 [2024-05-13 20:47:36.822922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.823142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.823152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.823535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.823871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.823880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.824233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.824572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.824582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.824916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.825178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.825187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.825539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.825886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.825895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.826161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.826414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.826425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.826718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.827077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.827087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 
00:34:21.049 [2024-05-13 20:47:36.827468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.827676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.827685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.827916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.828183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.828193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.828400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.828710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.828719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.829093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.829351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.829362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.829602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.829806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.829815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.830181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.830562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.830571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-05-13 20:47:36.830952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.831157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.831166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 
00:34:21.049 [2024-05-13 20:47:36.831349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-05-13 20:47:36.831760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.831770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.831987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.832193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.832205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.832558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.832784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.832794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.833177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.833528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.833538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.833775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.834181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.834190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.834417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.834717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.834726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.834933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.835104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.835114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 
00:34:21.050 [2024-05-13 20:47:36.835450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.835621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.835630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.836041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.836425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.836434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.836752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.836945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.836953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.837293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.837634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.837643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.837977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.838321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.838333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.838756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.839090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.839099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.839299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.839637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.839646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 
00:34:21.050 [2024-05-13 20:47:36.840073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.840418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.840427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.840800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.841145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.841154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.841365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.841723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.841732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.841994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.842341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.842351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.842705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.842907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.842916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.843282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.843682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.843691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.844032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.844412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.844421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 
00:34:21.050 [2024-05-13 20:47:36.844778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.845003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.845013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.845435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.845631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.845640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.846048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.846250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.846258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.846633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.846830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-05-13 20:47:36.846839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-05-13 20:47:36.847063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-05-13 20:47:36.847228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-05-13 20:47:36.847236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-05-13 20:47:36.847598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-05-13 20:47:36.847957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-05-13 20:47:36.847965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-05-13 20:47:36.848333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-05-13 20:47:36.848677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-05-13 20:47:36.848686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 
00:34:21.051 [2024-05-13 20:47:36.848867 .. 20:47:36.861502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (repeated for every attempt in this interval)
00:34:21.051 [2024-05-13 20:47:36.848867 .. 20:47:36.861502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 (repeated)
00:34:21.051 qpair failed and we were unable to recover it. (each attempt in this interval ends with the same three-line error sequence)
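For context: errno = 111 is ECONNREFUSED, i.e. at the moment of each connect() nothing is accepting on 10.0.0.2:4420, so the initiator's retry loop keeps failing until the target's listener comes up. A quick check from the test host, assuming a stock netcat is installed (illustrative only, not part of the test script):

  # Probe the NVMe-oF/TCP port; while it is closed the connect() attempts above keep returning errno 111.
  nc -z -w 1 10.0.0.2 4420 || echo "10.0.0.2:4420 not accepting connections yet"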
00:34:21.051 [2024-05-13 20:47:36.861648 .. 20:47:36.864014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 -- qpair failed and we were unable to recover it. (same retry sequence as above)
00:34:21.051 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:21.051 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:34:21.052 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:21.052 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:21.052 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:21.052 [2024-05-13 20:47:36.864390 .. 20:47:36.868511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 -- qpair failed and we were unable to recover it. (retries interleaved with the trace lines above)
00:34:21.052 .. 00:34:21.054 [2024-05-13 20:47:36.868868 .. 20:47:36.903136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (repeated for every connect attempt in this interval)
00:34:21.052 .. 00:34:21.054 [same interval] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 -- qpair failed and we were unable to recover it.
00:34:21.054 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:21.054 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:21.054 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:21.054 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:21.054 [2024-05-13 20:47:36.903604 .. 20:47:36.906781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 -- qpair failed and we were unable to recover it. (the retry loop keeps running around the trace lines above)
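The bdev backing the target is created through the harness's rpc_cmd wrapper. Assuming rpc_cmd forwards its arguments to the standard SPDK scripts/rpc.py helper (the usual arrangement in these test scripts), the equivalent standalone call would be:

  # Create a 64 MB RAM-backed bdev with 512-byte blocks, named Malloc0.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0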
00:34:21.054 .. 00:34:21.055 [2024-05-13 20:47:36.907000 .. 20:47:36.919406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (repeated for every connect attempt in this interval)
00:34:21.055 [same interval] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 -- qpair failed and we were unable to recover it.
00:34:21.055 Malloc0
00:34:21.055 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:21.055 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:21.055 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:21.055 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:21.055 [2024-05-13 20:47:36.926996] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:21.055 [2024-05-13 20:47:36.919727 .. 20:47:36.927678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 -- qpair failed and we were unable to recover it. (the retry loop keeps running around the trace lines above)
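The "TCP Transport Init" notice confirms the transport was created by the rpc_cmd call traced from host/target_disconnect.sh@21. Assuming rpc_cmd wraps scripts/rpc.py as above, a sketch of the equivalent call follows; the trailing -o is reproduced verbatim from the trace rather than interpreted, since its meaning depends on the rpc.py options in this tree:

  # Create the NVMe-oF TCP transport in the running target (-o passed through exactly as traced).
  ./scripts/rpc.py nvmf_create_transport -t tcp -o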
00:34:21.055 .. 00:34:21.056 [2024-05-13 20:47:36.928016 .. 20:47:36.932553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 -- qpair failed and we were unable to recover it. (same repeating error sequence)
00:34:21.056 [2024-05-13 20:47:36.932905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.933240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.933248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.933598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.933964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.933973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.934305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.934653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.934662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.935035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.935401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.935410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.935781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.056 [2024-05-13 20:47:36.936150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.936159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:21.056 [2024-05-13 20:47:36.936521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.936642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.936651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 
00:34:21.056 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.056 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.056 [2024-05-13 20:47:36.937015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.937381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.937390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.937684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.937887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.937896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.938274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.938494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.938503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.938864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.939250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.939259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.939452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.939778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.939787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.940120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.940420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.940429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.940670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.941056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.941064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 
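Interleaved with the connect retries, the xtrace lines from host/target_disconnect.sh@22 show the target side being configured over RPC. rpc_cmd here is the autotest helper that, in SPDK's test framework, forwards to scripts/rpc.py against the running nvmf_tgt; that wrapper detail is an assumption, but the method name and arguments below are copied from the trace. A minimal standalone sketch of the same step:

$ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # -a: allow any host NQN to connect; -s: serial number reported by the subsystem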
00:34:21.056 [2024-05-13 20:47:36.941454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.941831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.941840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.942210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.942571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.942580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.942950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.943288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.943297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.943650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.944025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.944033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.944395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.944755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.944764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.944976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.945286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.945295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.945672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.945729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.945737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 
00:34:21.056 [2024-05-13 20:47:36.946060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.946274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.946283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.946623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.946953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.946962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.947321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.947540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.947550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 [2024-05-13 20:47:36.947802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.948027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 [2024-05-13 20:47:36.948036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.056 qpair failed and we were unable to recover it. 00:34:21.056 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.056 [2024-05-13 20:47:36.948373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.056 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:21.057 [2024-05-13 20:47:36.948464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.948474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.948675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.057 [2024-05-13 20:47:36.948902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.948911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 
00:34:21.057 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.057 [2024-05-13 20:47:36.949136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.949507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.949516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.949722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.950120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.950128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.950336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.950678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.950687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.950980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.951359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.951369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.951679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.952067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.952075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.952416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.952765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.952774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.953108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.953279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.953289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 
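The next traced RPC (host/target_disconnect.sh@24) adds a namespace backed by the Malloc0 bdev to the subsystem. Malloc0 itself is created earlier in the run, outside this excerpt; in a standalone sketch it would have to be created first, and the size and block size below are illustrative assumptions rather than values taken from this log:

$ scripts/rpc.py bdev_malloc_create -b Malloc0 64 512     # hypothetical 64 MiB malloc bdev with 512-byte blocks
$ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0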
00:34:21.057 [2024-05-13 20:47:36.953522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.953873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.953881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.954265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.954633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.954642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.954828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.955143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.955152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.955509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.955726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.955734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.956092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.956468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.956477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.956853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.957200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.957208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.957556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.957652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.957661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 
00:34:21.057 [2024-05-13 20:47:36.958009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.958350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.958359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.958776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.958964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.958972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.959181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.959415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.959424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.959770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.057 [2024-05-13 20:47:36.960129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.960138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.960322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.057 [2024-05-13 20:47:36.960649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.960658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.057 [2024-05-13 20:47:36.961018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.057 [2024-05-13 20:47:36.961436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.961445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 
00:34:21.057 [2024-05-13 20:47:36.961843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.962191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.962200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.962508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.962697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.962706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.963053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.963442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.963451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.963690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.963878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.963890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.964178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.964468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.964478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.057 [2024-05-13 20:47:36.964838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.965176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.057 [2024-05-13 20:47:36.965185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.057 qpair failed and we were unable to recover it. 00:34:21.058 [2024-05-13 20:47:36.965391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.058 [2024-05-13 20:47:36.965643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.058 [2024-05-13 20:47:36.965652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.058 qpair failed and we were unable to recover it. 
00:34:21.058 [2024-05-13 20:47:36.966008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.058 [2024-05-13 20:47:36.966064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.058 [2024-05-13 20:47:36.966072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.058 qpair failed and we were unable to recover it. 00:34:21.058 [2024-05-13 20:47:36.966381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.058 [2024-05-13 20:47:36.966748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.058 [2024-05-13 20:47:36.966756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd354000b90 with addr=10.0.0.2, port=4420 00:34:21.058 qpair failed and we were unable to recover it. 00:34:21.058 [2024-05-13 20:47:36.967076] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:21.058 [2024-05-13 20:47:36.967131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.058 [2024-05-13 20:47:36.967309] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.058 [2024-05-13 20:47:36.969862] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:34:21.058 [2024-05-13 20:47:36.969902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fd354000b90 (107): Transport endpoint is not connected 00:34:21.058 [2024-05-13 20:47:36.969941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.058 qpair failed and we were unable to recover it. 00:34:21.058 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.058 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:21.058 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.058 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.321 [2024-05-13 20:47:36.977718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.321 [2024-05-13 20:47:36.977837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.321 [2024-05-13 20:47:36.977856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.321 [2024-05-13 20:47:36.977867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.321 [2024-05-13 20:47:36.977873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.321 [2024-05-13 20:47:36.977890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.321 qpair failed and we were unable to recover it. 
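The two listener RPCs traced above (host/target_disconnect.sh@25 and @26) are what finally bring up the data listener, confirmed by the "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" notice, and the discovery listener. The decode_rpc_listen_address warning appears to flag only that the request still carried the deprecated [listen_]address.transport field instead of trtype (scheduled for removal in v24.09); the listener is still created. Equivalent standalone calls, with the same arguments as in the trace:

$ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$ scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420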
00:34:21.321 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.321 20:47:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@58 -- # wait 3302297 00:34:21.321 [2024-05-13 20:47:36.987625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.321 [2024-05-13 20:47:36.987713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.321 [2024-05-13 20:47:36.987729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.321 [2024-05-13 20:47:36.987737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.321 [2024-05-13 20:47:36.987743] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.321 [2024-05-13 20:47:36.987759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.321 qpair failed and we were unable to recover it. 00:34:21.321 [2024-05-13 20:47:36.997566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.321 [2024-05-13 20:47:36.997640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.321 [2024-05-13 20:47:36.997655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.321 [2024-05-13 20:47:36.997662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.321 [2024-05-13 20:47:36.997669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.321 [2024-05-13 20:47:36.997683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.321 qpair failed and we were unable to recover it. 00:34:21.321 [2024-05-13 20:47:37.007630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.321 [2024-05-13 20:47:37.007706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.321 [2024-05-13 20:47:37.007721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.321 [2024-05-13 20:47:37.007729] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.321 [2024-05-13 20:47:37.007735] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.321 [2024-05-13 20:47:37.007749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.321 qpair failed and we were unable to recover it. 
00:34:21.321 [2024-05-13 20:47:37.017623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.321 [2024-05-13 20:47:37.017730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.321 [2024-05-13 20:47:37.017745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.321 [2024-05-13 20:47:37.017753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.321 [2024-05-13 20:47:37.017762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.321 [2024-05-13 20:47:37.017777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.321 qpair failed and we were unable to recover it. 00:34:21.321 [2024-05-13 20:47:37.027663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.321 [2024-05-13 20:47:37.027733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.321 [2024-05-13 20:47:37.027748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.321 [2024-05-13 20:47:37.027755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.321 [2024-05-13 20:47:37.027761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.321 [2024-05-13 20:47:37.027775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.321 qpair failed and we were unable to recover it. 00:34:21.321 [2024-05-13 20:47:37.037688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.321 [2024-05-13 20:47:37.037757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.321 [2024-05-13 20:47:37.037774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.321 [2024-05-13 20:47:37.037782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.321 [2024-05-13 20:47:37.037788] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.321 [2024-05-13 20:47:37.037803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.321 qpair failed and we were unable to recover it. 
00:34:21.321 [2024-05-13 20:47:37.047688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.321 [2024-05-13 20:47:37.047754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.321 [2024-05-13 20:47:37.047770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.321 [2024-05-13 20:47:37.047777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.321 [2024-05-13 20:47:37.047783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.321 [2024-05-13 20:47:37.047797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.321 qpair failed and we were unable to recover it. 00:34:21.321 [2024-05-13 20:47:37.057728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.321 [2024-05-13 20:47:37.057788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.321 [2024-05-13 20:47:37.057803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.321 [2024-05-13 20:47:37.057810] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.321 [2024-05-13 20:47:37.057816] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.321 [2024-05-13 20:47:37.057830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.321 qpair failed and we were unable to recover it. 00:34:21.321 [2024-05-13 20:47:37.067787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.322 [2024-05-13 20:47:37.067889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.322 [2024-05-13 20:47:37.067905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.322 [2024-05-13 20:47:37.067911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.322 [2024-05-13 20:47:37.067917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.322 [2024-05-13 20:47:37.067932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.322 qpair failed and we were unable to recover it. 
00:34:21.322 [2024-05-13 20:47:37.077938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.322 [2024-05-13 20:47:37.078043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.322 [2024-05-13 20:47:37.078059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.322 [2024-05-13 20:47:37.078066] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.322 [2024-05-13 20:47:37.078072] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.322 [2024-05-13 20:47:37.078086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.322 qpair failed and we were unable to recover it. 00:34:21.322 [2024-05-13 20:47:37.087785] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.322 [2024-05-13 20:47:37.087853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.322 [2024-05-13 20:47:37.087868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.322 [2024-05-13 20:47:37.087875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.322 [2024-05-13 20:47:37.087881] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.322 [2024-05-13 20:47:37.087895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.322 qpair failed and we were unable to recover it. 00:34:21.322 [2024-05-13 20:47:37.097845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.322 [2024-05-13 20:47:37.097909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.322 [2024-05-13 20:47:37.097924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.322 [2024-05-13 20:47:37.097930] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.322 [2024-05-13 20:47:37.097936] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.322 [2024-05-13 20:47:37.097950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.322 qpair failed and we were unable to recover it. 
00:34:21.322 [2024-05-13 20:47:37.107748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.322 [2024-05-13 20:47:37.107828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.322 [2024-05-13 20:47:37.107844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.322 [2024-05-13 20:47:37.107855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.322 [2024-05-13 20:47:37.107861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.322 [2024-05-13 20:47:37.107875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.322 qpair failed and we were unable to recover it. 00:34:21.322 [2024-05-13 20:47:37.117878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.322 [2024-05-13 20:47:37.117948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.322 [2024-05-13 20:47:37.117963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.322 [2024-05-13 20:47:37.117970] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.322 [2024-05-13 20:47:37.117976] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.322 [2024-05-13 20:47:37.117990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.322 qpair failed and we were unable to recover it. 00:34:21.322 [2024-05-13 20:47:37.127939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.322 [2024-05-13 20:47:37.128013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.322 [2024-05-13 20:47:37.128029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.322 [2024-05-13 20:47:37.128036] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.322 [2024-05-13 20:47:37.128042] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.322 [2024-05-13 20:47:37.128057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.322 qpair failed and we were unable to recover it. 
00:34:21.322 [2024-05-13 20:47:37.137978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.322 [2024-05-13 20:47:37.138094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.322 [2024-05-13 20:47:37.138118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.322 [2024-05-13 20:47:37.138126] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.322 [2024-05-13 20:47:37.138133] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.322 [2024-05-13 20:47:37.138151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.322 qpair failed and we were unable to recover it. 00:34:21.322 [2024-05-13 20:47:37.147982] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.322 [2024-05-13 20:47:37.148096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.322 [2024-05-13 20:47:37.148113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.322 [2024-05-13 20:47:37.148120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.322 [2024-05-13 20:47:37.148126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.322 [2024-05-13 20:47:37.148141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.322 qpair failed and we were unable to recover it. 00:34:21.322 [2024-05-13 20:47:37.158030] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.322 [2024-05-13 20:47:37.158106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.322 [2024-05-13 20:47:37.158122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.322 [2024-05-13 20:47:37.158129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.322 [2024-05-13 20:47:37.158136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.322 [2024-05-13 20:47:37.158150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.322 qpair failed and we were unable to recover it. 
00:34:21.322 [2024-05-13 20:47:37.167947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.322 [2024-05-13 20:47:37.168028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.322 [2024-05-13 20:47:37.168043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.322 [2024-05-13 20:47:37.168051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.322 [2024-05-13 20:47:37.168057] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.322 [2024-05-13 20:47:37.168071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.322 qpair failed and we were unable to recover it. 00:34:21.322 [2024-05-13 20:47:37.178054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.322 [2024-05-13 20:47:37.178167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.322 [2024-05-13 20:47:37.178183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.322 [2024-05-13 20:47:37.178190] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.322 [2024-05-13 20:47:37.178196] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.322 [2024-05-13 20:47:37.178210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.322 qpair failed and we were unable to recover it. 00:34:21.322 [2024-05-13 20:47:37.188094] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.322 [2024-05-13 20:47:37.188171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.322 [2024-05-13 20:47:37.188186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.322 [2024-05-13 20:47:37.188194] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.322 [2024-05-13 20:47:37.188200] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.322 [2024-05-13 20:47:37.188213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.322 qpair failed and we were unable to recover it. 
00:34:21.322 [2024-05-13 20:47:37.198101] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.322 [2024-05-13 20:47:37.198169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.322 [2024-05-13 20:47:37.198184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.322 [2024-05-13 20:47:37.198198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.322 [2024-05-13 20:47:37.198205] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.322 [2024-05-13 20:47:37.198219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.322 qpair failed and we were unable to recover it. 00:34:21.322 [2024-05-13 20:47:37.208051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.323 [2024-05-13 20:47:37.208122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.323 [2024-05-13 20:47:37.208138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.323 [2024-05-13 20:47:37.208145] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.323 [2024-05-13 20:47:37.208151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.323 [2024-05-13 20:47:37.208165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.323 qpair failed and we were unable to recover it. 00:34:21.323 [2024-05-13 20:47:37.218188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.323 [2024-05-13 20:47:37.218255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.323 [2024-05-13 20:47:37.218271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.323 [2024-05-13 20:47:37.218277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.323 [2024-05-13 20:47:37.218283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.323 [2024-05-13 20:47:37.218297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.323 qpair failed and we were unable to recover it. 
00:34:21.323 [2024-05-13 20:47:37.228231] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.323 [2024-05-13 20:47:37.228298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.323 [2024-05-13 20:47:37.228317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.323 [2024-05-13 20:47:37.228325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.323 [2024-05-13 20:47:37.228331] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.323 [2024-05-13 20:47:37.228345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.323 qpair failed and we were unable to recover it. 00:34:21.323 [2024-05-13 20:47:37.238212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.323 [2024-05-13 20:47:37.238311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.323 [2024-05-13 20:47:37.238334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.323 [2024-05-13 20:47:37.238341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.323 [2024-05-13 20:47:37.238347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.323 [2024-05-13 20:47:37.238362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.323 qpair failed and we were unable to recover it. 00:34:21.323 [2024-05-13 20:47:37.248249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.323 [2024-05-13 20:47:37.248323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.323 [2024-05-13 20:47:37.248340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.323 [2024-05-13 20:47:37.248347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.323 [2024-05-13 20:47:37.248361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.323 [2024-05-13 20:47:37.248376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.323 qpair failed and we were unable to recover it. 
00:34:21.323 [2024-05-13 20:47:37.258373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.323 [2024-05-13 20:47:37.258454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.323 [2024-05-13 20:47:37.258470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.323 [2024-05-13 20:47:37.258476] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.323 [2024-05-13 20:47:37.258482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.323 [2024-05-13 20:47:37.258497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.323 qpair failed and we were unable to recover it. 00:34:21.585 [2024-05-13 20:47:37.268368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.585 [2024-05-13 20:47:37.268434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.585 [2024-05-13 20:47:37.268450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.585 [2024-05-13 20:47:37.268457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.585 [2024-05-13 20:47:37.268463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.585 [2024-05-13 20:47:37.268478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.585 qpair failed and we were unable to recover it. 00:34:21.585 [2024-05-13 20:47:37.278277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.585 [2024-05-13 20:47:37.278348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.585 [2024-05-13 20:47:37.278364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.585 [2024-05-13 20:47:37.278371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.585 [2024-05-13 20:47:37.278377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.585 [2024-05-13 20:47:37.278391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.585 qpair failed and we were unable to recover it. 
00:34:21.585 [2024-05-13 20:47:37.288453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.585 [2024-05-13 20:47:37.288551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.585 [2024-05-13 20:47:37.288573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.585 [2024-05-13 20:47:37.288580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.585 [2024-05-13 20:47:37.288587] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.585 [2024-05-13 20:47:37.288601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.585 qpair failed and we were unable to recover it. 00:34:21.585 [2024-05-13 20:47:37.298386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.585 [2024-05-13 20:47:37.298448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.585 [2024-05-13 20:47:37.298463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.585 [2024-05-13 20:47:37.298470] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.585 [2024-05-13 20:47:37.298476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.585 [2024-05-13 20:47:37.298492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.585 qpair failed and we were unable to recover it. 00:34:21.585 [2024-05-13 20:47:37.308453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.585 [2024-05-13 20:47:37.308513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.585 [2024-05-13 20:47:37.308529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.585 [2024-05-13 20:47:37.308536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.585 [2024-05-13 20:47:37.308542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.585 [2024-05-13 20:47:37.308557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.585 qpair failed and we were unable to recover it. 
00:34:21.585 [2024-05-13 20:47:37.318449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.585 [2024-05-13 20:47:37.318516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.585 [2024-05-13 20:47:37.318531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.585 [2024-05-13 20:47:37.318538] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.585 [2024-05-13 20:47:37.318544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.585 [2024-05-13 20:47:37.318558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.585 qpair failed and we were unable to recover it. 00:34:21.585 [2024-05-13 20:47:37.328493] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.585 [2024-05-13 20:47:37.328603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.585 [2024-05-13 20:47:37.328619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.585 [2024-05-13 20:47:37.328626] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.585 [2024-05-13 20:47:37.328632] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.585 [2024-05-13 20:47:37.328650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.585 qpair failed and we were unable to recover it. 00:34:21.585 [2024-05-13 20:47:37.338507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.585 [2024-05-13 20:47:37.338576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.585 [2024-05-13 20:47:37.338592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.585 [2024-05-13 20:47:37.338599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.585 [2024-05-13 20:47:37.338605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.585 [2024-05-13 20:47:37.338619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.585 qpair failed and we were unable to recover it. 
00:34:21.585 [2024-05-13 20:47:37.348545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.585 [2024-05-13 20:47:37.348604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.585 [2024-05-13 20:47:37.348619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.585 [2024-05-13 20:47:37.348626] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.585 [2024-05-13 20:47:37.348632] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.585 [2024-05-13 20:47:37.348646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.585 qpair failed and we were unable to recover it. 00:34:21.585 [2024-05-13 20:47:37.358555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.585 [2024-05-13 20:47:37.358647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.585 [2024-05-13 20:47:37.358662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.585 [2024-05-13 20:47:37.358669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.585 [2024-05-13 20:47:37.358675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.585 [2024-05-13 20:47:37.358689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.585 qpair failed and we were unable to recover it. 00:34:21.585 [2024-05-13 20:47:37.368661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.585 [2024-05-13 20:47:37.368783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.585 [2024-05-13 20:47:37.368799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.586 [2024-05-13 20:47:37.368806] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.586 [2024-05-13 20:47:37.368812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.586 [2024-05-13 20:47:37.368826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.586 qpair failed and we were unable to recover it. 
00:34:21.586 [2024-05-13 20:47:37.378513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.586 [2024-05-13 20:47:37.378611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.586 [2024-05-13 20:47:37.378630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.586 [2024-05-13 20:47:37.378637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.586 [2024-05-13 20:47:37.378643] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.586 [2024-05-13 20:47:37.378657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.586 qpair failed and we were unable to recover it. 00:34:21.586 [2024-05-13 20:47:37.388641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.586 [2024-05-13 20:47:37.388724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.586 [2024-05-13 20:47:37.388740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.586 [2024-05-13 20:47:37.388746] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.586 [2024-05-13 20:47:37.388753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.586 [2024-05-13 20:47:37.388767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.586 qpair failed and we were unable to recover it. 00:34:21.586 [2024-05-13 20:47:37.398556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.586 [2024-05-13 20:47:37.398622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.586 [2024-05-13 20:47:37.398637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.586 [2024-05-13 20:47:37.398644] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.586 [2024-05-13 20:47:37.398651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.586 [2024-05-13 20:47:37.398665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.586 qpair failed and we were unable to recover it. 
00:34:21.586 [2024-05-13 20:47:37.408703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.586 [2024-05-13 20:47:37.408784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.586 [2024-05-13 20:47:37.408800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.586 [2024-05-13 20:47:37.408807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.586 [2024-05-13 20:47:37.408813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.586 [2024-05-13 20:47:37.408827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.586 qpair failed and we were unable to recover it. 00:34:21.586 [2024-05-13 20:47:37.418751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.586 [2024-05-13 20:47:37.418833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.586 [2024-05-13 20:47:37.418849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.586 [2024-05-13 20:47:37.418856] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.586 [2024-05-13 20:47:37.418866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.586 [2024-05-13 20:47:37.418881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.586 qpair failed and we were unable to recover it. 00:34:21.586 [2024-05-13 20:47:37.428809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.586 [2024-05-13 20:47:37.428878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.586 [2024-05-13 20:47:37.428894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.586 [2024-05-13 20:47:37.428901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.586 [2024-05-13 20:47:37.428907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.586 [2024-05-13 20:47:37.428921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.586 qpair failed and we were unable to recover it. 
00:34:21.586 [2024-05-13 20:47:37.438766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.586 [2024-05-13 20:47:37.438877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.586 [2024-05-13 20:47:37.438893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.586 [2024-05-13 20:47:37.438900] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.586 [2024-05-13 20:47:37.438906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.586 [2024-05-13 20:47:37.438922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.586 qpair failed and we were unable to recover it. 00:34:21.586 [2024-05-13 20:47:37.448796] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.586 [2024-05-13 20:47:37.448865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.586 [2024-05-13 20:47:37.448881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.586 [2024-05-13 20:47:37.448888] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.586 [2024-05-13 20:47:37.448894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.586 [2024-05-13 20:47:37.448910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.586 qpair failed and we were unable to recover it. 00:34:21.586 [2024-05-13 20:47:37.458710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.586 [2024-05-13 20:47:37.458776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.586 [2024-05-13 20:47:37.458792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.586 [2024-05-13 20:47:37.458799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.586 [2024-05-13 20:47:37.458806] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.586 [2024-05-13 20:47:37.458821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.586 qpair failed and we were unable to recover it. 
00:34:21.586 [2024-05-13 20:47:37.468846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.586 [2024-05-13 20:47:37.468918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.586 [2024-05-13 20:47:37.468934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.586 [2024-05-13 20:47:37.468941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.586 [2024-05-13 20:47:37.468947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.586 [2024-05-13 20:47:37.468961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.586 qpair failed and we were unable to recover it. 00:34:21.586 [2024-05-13 20:47:37.478892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.586 [2024-05-13 20:47:37.479007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.586 [2024-05-13 20:47:37.479023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.586 [2024-05-13 20:47:37.479030] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.586 [2024-05-13 20:47:37.479037] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.586 [2024-05-13 20:47:37.479051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.586 qpair failed and we were unable to recover it. 00:34:21.586 [2024-05-13 20:47:37.488930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.586 [2024-05-13 20:47:37.489006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.586 [2024-05-13 20:47:37.489030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.586 [2024-05-13 20:47:37.489038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.586 [2024-05-13 20:47:37.489045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.586 [2024-05-13 20:47:37.489063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.586 qpair failed and we were unable to recover it. 
00:34:21.586 [2024-05-13 20:47:37.498933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.586 [2024-05-13 20:47:37.499005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.586 [2024-05-13 20:47:37.499029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.586 [2024-05-13 20:47:37.499037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.586 [2024-05-13 20:47:37.499043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.586 [2024-05-13 20:47:37.499062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.586 qpair failed and we were unable to recover it. 00:34:21.586 [2024-05-13 20:47:37.508959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.586 [2024-05-13 20:47:37.509066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.587 [2024-05-13 20:47:37.509090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.587 [2024-05-13 20:47:37.509098] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.587 [2024-05-13 20:47:37.509108] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.587 [2024-05-13 20:47:37.509126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.587 qpair failed and we were unable to recover it. 00:34:21.587 [2024-05-13 20:47:37.518919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.587 [2024-05-13 20:47:37.518989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.587 [2024-05-13 20:47:37.519013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.587 [2024-05-13 20:47:37.519021] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.587 [2024-05-13 20:47:37.519028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.587 [2024-05-13 20:47:37.519046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.587 qpair failed and we were unable to recover it. 
00:34:21.863 [2024-05-13 20:47:37.529039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.863 [2024-05-13 20:47:37.529173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.863 [2024-05-13 20:47:37.529197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.863 [2024-05-13 20:47:37.529205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.863 [2024-05-13 20:47:37.529212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.863 [2024-05-13 20:47:37.529231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.863 qpair failed and we were unable to recover it. 00:34:21.863 [2024-05-13 20:47:37.538940] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.863 [2024-05-13 20:47:37.539008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.863 [2024-05-13 20:47:37.539025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.863 [2024-05-13 20:47:37.539032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.863 [2024-05-13 20:47:37.539038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.863 [2024-05-13 20:47:37.539054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.863 qpair failed and we were unable to recover it. 00:34:21.863 [2024-05-13 20:47:37.549068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.863 [2024-05-13 20:47:37.549157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.863 [2024-05-13 20:47:37.549173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.863 [2024-05-13 20:47:37.549180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.863 [2024-05-13 20:47:37.549186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.863 [2024-05-13 20:47:37.549200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.863 qpair failed and we were unable to recover it. 
00:34:21.863 [2024-05-13 20:47:37.559156] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.863 [2024-05-13 20:47:37.559225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.863 [2024-05-13 20:47:37.559241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.863 [2024-05-13 20:47:37.559248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.863 [2024-05-13 20:47:37.559254] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.863 [2024-05-13 20:47:37.559268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.863 qpair failed and we were unable to recover it. 00:34:21.863 [2024-05-13 20:47:37.569170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.863 [2024-05-13 20:47:37.569250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.863 [2024-05-13 20:47:37.569266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.863 [2024-05-13 20:47:37.569273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.863 [2024-05-13 20:47:37.569279] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.863 [2024-05-13 20:47:37.569293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.863 qpair failed and we were unable to recover it. 00:34:21.863 [2024-05-13 20:47:37.579174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.863 [2024-05-13 20:47:37.579244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.863 [2024-05-13 20:47:37.579260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.863 [2024-05-13 20:47:37.579267] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.863 [2024-05-13 20:47:37.579273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.863 [2024-05-13 20:47:37.579287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.863 qpair failed and we were unable to recover it. 
00:34:21.863 [2024-05-13 20:47:37.589174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.863 [2024-05-13 20:47:37.589274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.863 [2024-05-13 20:47:37.589290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.863 [2024-05-13 20:47:37.589297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.863 [2024-05-13 20:47:37.589303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.863 [2024-05-13 20:47:37.589323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.863 qpair failed and we were unable to recover it. 00:34:21.863 [2024-05-13 20:47:37.599197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.863 [2024-05-13 20:47:37.599327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.863 [2024-05-13 20:47:37.599343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.863 [2024-05-13 20:47:37.599353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.863 [2024-05-13 20:47:37.599360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.863 [2024-05-13 20:47:37.599374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.863 qpair failed and we were unable to recover it. 00:34:21.863 [2024-05-13 20:47:37.609276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.863 [2024-05-13 20:47:37.609349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.863 [2024-05-13 20:47:37.609366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.863 [2024-05-13 20:47:37.609373] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.863 [2024-05-13 20:47:37.609379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.863 [2024-05-13 20:47:37.609393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.863 qpair failed and we were unable to recover it. 
00:34:21.863 [2024-05-13 20:47:37.619257] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.863 [2024-05-13 20:47:37.619325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.863 [2024-05-13 20:47:37.619341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.863 [2024-05-13 20:47:37.619348] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.863 [2024-05-13 20:47:37.619354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.863 [2024-05-13 20:47:37.619369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.863 qpair failed and we were unable to recover it. 00:34:21.863 [2024-05-13 20:47:37.629301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.863 [2024-05-13 20:47:37.629369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.863 [2024-05-13 20:47:37.629385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.863 [2024-05-13 20:47:37.629392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.863 [2024-05-13 20:47:37.629398] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.863 [2024-05-13 20:47:37.629412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.863 qpair failed and we were unable to recover it. 00:34:21.863 [2024-05-13 20:47:37.639319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.863 [2024-05-13 20:47:37.639384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.863 [2024-05-13 20:47:37.639400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.863 [2024-05-13 20:47:37.639406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.863 [2024-05-13 20:47:37.639412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.863 [2024-05-13 20:47:37.639427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.863 qpair failed and we were unable to recover it. 
00:34:21.863 [2024-05-13 20:47:37.649355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.863 [2024-05-13 20:47:37.649421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.863 [2024-05-13 20:47:37.649437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.863 [2024-05-13 20:47:37.649444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.863 [2024-05-13 20:47:37.649450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.863 [2024-05-13 20:47:37.649464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.863 qpair failed and we were unable to recover it. 00:34:21.864 [2024-05-13 20:47:37.659353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.864 [2024-05-13 20:47:37.659423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.864 [2024-05-13 20:47:37.659439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.864 [2024-05-13 20:47:37.659446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.864 [2024-05-13 20:47:37.659452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.864 [2024-05-13 20:47:37.659466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.864 qpair failed and we were unable to recover it. 00:34:21.864 [2024-05-13 20:47:37.669447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.864 [2024-05-13 20:47:37.669509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.864 [2024-05-13 20:47:37.669524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.864 [2024-05-13 20:47:37.669531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.864 [2024-05-13 20:47:37.669537] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.864 [2024-05-13 20:47:37.669552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.864 qpair failed and we were unable to recover it. 
00:34:21.864 [2024-05-13 20:47:37.679490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.864 [2024-05-13 20:47:37.679557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.864 [2024-05-13 20:47:37.679572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.864 [2024-05-13 20:47:37.679579] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.864 [2024-05-13 20:47:37.679585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.864 [2024-05-13 20:47:37.679599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.864 qpair failed and we were unable to recover it. 00:34:21.864 [2024-05-13 20:47:37.689456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.864 [2024-05-13 20:47:37.689523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.864 [2024-05-13 20:47:37.689542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.864 [2024-05-13 20:47:37.689549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.864 [2024-05-13 20:47:37.689555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.864 [2024-05-13 20:47:37.689570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.864 qpair failed and we were unable to recover it. 00:34:21.864 [2024-05-13 20:47:37.699386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.864 [2024-05-13 20:47:37.699458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.864 [2024-05-13 20:47:37.699474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.864 [2024-05-13 20:47:37.699481] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.864 [2024-05-13 20:47:37.699487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.864 [2024-05-13 20:47:37.699501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.864 qpair failed and we were unable to recover it. 
00:34:21.864 [2024-05-13 20:47:37.709503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.864 [2024-05-13 20:47:37.709572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.864 [2024-05-13 20:47:37.709587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.864 [2024-05-13 20:47:37.709594] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.864 [2024-05-13 20:47:37.709600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.864 [2024-05-13 20:47:37.709614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.864 qpair failed and we were unable to recover it. 00:34:21.864 [2024-05-13 20:47:37.719556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.864 [2024-05-13 20:47:37.719622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.864 [2024-05-13 20:47:37.719638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.864 [2024-05-13 20:47:37.719645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.864 [2024-05-13 20:47:37.719651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.864 [2024-05-13 20:47:37.719665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.864 qpair failed and we were unable to recover it. 00:34:21.864 [2024-05-13 20:47:37.729575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.864 [2024-05-13 20:47:37.729643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.864 [2024-05-13 20:47:37.729659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.864 [2024-05-13 20:47:37.729666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.864 [2024-05-13 20:47:37.729672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.864 [2024-05-13 20:47:37.729689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.864 qpair failed and we were unable to recover it. 
00:34:21.864 [2024-05-13 20:47:37.739603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.864 [2024-05-13 20:47:37.739669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.864 [2024-05-13 20:47:37.739684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.864 [2024-05-13 20:47:37.739691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.864 [2024-05-13 20:47:37.739697] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.864 [2024-05-13 20:47:37.739711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.864 qpair failed and we were unable to recover it. 00:34:21.864 [2024-05-13 20:47:37.749659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.864 [2024-05-13 20:47:37.749723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.864 [2024-05-13 20:47:37.749739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.864 [2024-05-13 20:47:37.749746] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.864 [2024-05-13 20:47:37.749751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.864 [2024-05-13 20:47:37.749766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.864 qpair failed and we were unable to recover it. 00:34:21.864 [2024-05-13 20:47:37.759659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.864 [2024-05-13 20:47:37.759723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.864 [2024-05-13 20:47:37.759738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.864 [2024-05-13 20:47:37.759745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.864 [2024-05-13 20:47:37.759751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.864 [2024-05-13 20:47:37.759765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.864 qpair failed and we were unable to recover it. 
00:34:21.864 [2024-05-13 20:47:37.769726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.864 [2024-05-13 20:47:37.769841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.864 [2024-05-13 20:47:37.769856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.864 [2024-05-13 20:47:37.769863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.864 [2024-05-13 20:47:37.769869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.864 [2024-05-13 20:47:37.769883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.864 qpair failed and we were unable to recover it. 00:34:21.864 [2024-05-13 20:47:37.779743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.864 [2024-05-13 20:47:37.779811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.864 [2024-05-13 20:47:37.779830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.864 [2024-05-13 20:47:37.779837] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.864 [2024-05-13 20:47:37.779843] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.864 [2024-05-13 20:47:37.779857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.864 qpair failed and we were unable to recover it. 00:34:21.864 [2024-05-13 20:47:37.789823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.864 [2024-05-13 20:47:37.789933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.864 [2024-05-13 20:47:37.789948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.864 [2024-05-13 20:47:37.789955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.864 [2024-05-13 20:47:37.789961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.864 [2024-05-13 20:47:37.789975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.865 qpair failed and we were unable to recover it. 
00:34:21.865 [2024-05-13 20:47:37.799772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.865 [2024-05-13 20:47:37.799884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.865 [2024-05-13 20:47:37.799899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.865 [2024-05-13 20:47:37.799906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.865 [2024-05-13 20:47:37.799912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:21.865 [2024-05-13 20:47:37.799926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.865 qpair failed and we were unable to recover it. 00:34:22.128 [2024-05-13 20:47:37.809795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.128 [2024-05-13 20:47:37.809869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.128 [2024-05-13 20:47:37.809893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.128 [2024-05-13 20:47:37.809902] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.128 [2024-05-13 20:47:37.809909] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.128 [2024-05-13 20:47:37.809927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-05-13 20:47:37.819818] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.128 [2024-05-13 20:47:37.819894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.128 [2024-05-13 20:47:37.819911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.128 [2024-05-13 20:47:37.819918] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.128 [2024-05-13 20:47:37.819928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.128 [2024-05-13 20:47:37.819943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.128 qpair failed and we were unable to recover it. 
00:34:22.128 [2024-05-13 20:47:37.829850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.128 [2024-05-13 20:47:37.829915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.128 [2024-05-13 20:47:37.829932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.128 [2024-05-13 20:47:37.829939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.128 [2024-05-13 20:47:37.829946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.128 [2024-05-13 20:47:37.829961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-05-13 20:47:37.839774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.128 [2024-05-13 20:47:37.839842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.128 [2024-05-13 20:47:37.839858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.128 [2024-05-13 20:47:37.839865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.128 [2024-05-13 20:47:37.839871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.128 [2024-05-13 20:47:37.839890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-05-13 20:47:37.849920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.128 [2024-05-13 20:47:37.850015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.128 [2024-05-13 20:47:37.850031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.128 [2024-05-13 20:47:37.850038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.128 [2024-05-13 20:47:37.850044] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.128 [2024-05-13 20:47:37.850058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.128 qpair failed and we were unable to recover it. 
00:34:22.128 [2024-05-13 20:47:37.859987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.128 [2024-05-13 20:47:37.860067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.128 [2024-05-13 20:47:37.860083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.128 [2024-05-13 20:47:37.860092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.128 [2024-05-13 20:47:37.860098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.128 [2024-05-13 20:47:37.860113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-05-13 20:47:37.869952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.128 [2024-05-13 20:47:37.870037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.128 [2024-05-13 20:47:37.870053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.128 [2024-05-13 20:47:37.870060] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.128 [2024-05-13 20:47:37.870066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.128 [2024-05-13 20:47:37.870080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-05-13 20:47:37.879999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.128 [2024-05-13 20:47:37.880065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.128 [2024-05-13 20:47:37.880081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.128 [2024-05-13 20:47:37.880088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.128 [2024-05-13 20:47:37.880094] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.128 [2024-05-13 20:47:37.880108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.128 qpair failed and we were unable to recover it. 
00:34:22.128 [2024-05-13 20:47:37.890024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.128 [2024-05-13 20:47:37.890089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.128 [2024-05-13 20:47:37.890105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.128 [2024-05-13 20:47:37.890112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.128 [2024-05-13 20:47:37.890118] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.128 [2024-05-13 20:47:37.890132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-05-13 20:47:37.900046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.128 [2024-05-13 20:47:37.900118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.128 [2024-05-13 20:47:37.900133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.128 [2024-05-13 20:47:37.900140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.128 [2024-05-13 20:47:37.900146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.128 [2024-05-13 20:47:37.900160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-05-13 20:47:37.910106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.128 [2024-05-13 20:47:37.910198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.128 [2024-05-13 20:47:37.910214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.128 [2024-05-13 20:47:37.910220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.128 [2024-05-13 20:47:37.910230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.128 [2024-05-13 20:47:37.910244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.128 qpair failed and we were unable to recover it. 
00:34:22.128 [2024-05-13 20:47:37.920104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.128 [2024-05-13 20:47:37.920171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.128 [2024-05-13 20:47:37.920186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.128 [2024-05-13 20:47:37.920193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.128 [2024-05-13 20:47:37.920199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.128 [2024-05-13 20:47:37.920213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-05-13 20:47:37.930025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.128 [2024-05-13 20:47:37.930095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.128 [2024-05-13 20:47:37.930111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.128 [2024-05-13 20:47:37.930118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.128 [2024-05-13 20:47:37.930124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.129 [2024-05-13 20:47:37.930138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-05-13 20:47:37.940168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.129 [2024-05-13 20:47:37.940234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.129 [2024-05-13 20:47:37.940250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.129 [2024-05-13 20:47:37.940257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.129 [2024-05-13 20:47:37.940263] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.129 [2024-05-13 20:47:37.940277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.129 qpair failed and we were unable to recover it. 
00:34:22.129 [2024-05-13 20:47:37.950214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.129 [2024-05-13 20:47:37.950282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.129 [2024-05-13 20:47:37.950298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.129 [2024-05-13 20:47:37.950305] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.129 [2024-05-13 20:47:37.950311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.129 [2024-05-13 20:47:37.950331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-05-13 20:47:37.960213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.129 [2024-05-13 20:47:37.960281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.129 [2024-05-13 20:47:37.960296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.129 [2024-05-13 20:47:37.960303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.129 [2024-05-13 20:47:37.960309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.129 [2024-05-13 20:47:37.960329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-05-13 20:47:37.970239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.129 [2024-05-13 20:47:37.970308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.129 [2024-05-13 20:47:37.970328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.129 [2024-05-13 20:47:37.970335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.129 [2024-05-13 20:47:37.970341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.129 [2024-05-13 20:47:37.970355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.129 qpair failed and we were unable to recover it. 
00:34:22.129 [2024-05-13 20:47:37.980277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.129 [2024-05-13 20:47:37.980349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.129 [2024-05-13 20:47:37.980365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.129 [2024-05-13 20:47:37.980372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.129 [2024-05-13 20:47:37.980378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.129 [2024-05-13 20:47:37.980392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-05-13 20:47:37.990327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.129 [2024-05-13 20:47:37.990433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.129 [2024-05-13 20:47:37.990448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.129 [2024-05-13 20:47:37.990455] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.129 [2024-05-13 20:47:37.990461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.129 [2024-05-13 20:47:37.990476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-05-13 20:47:38.000330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.129 [2024-05-13 20:47:38.000400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.129 [2024-05-13 20:47:38.000416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.129 [2024-05-13 20:47:38.000426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.129 [2024-05-13 20:47:38.000432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.129 [2024-05-13 20:47:38.000447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.129 qpair failed and we were unable to recover it. 
00:34:22.129 [2024-05-13 20:47:38.010290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.129 [2024-05-13 20:47:38.010387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.129 [2024-05-13 20:47:38.010403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.129 [2024-05-13 20:47:38.010410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.129 [2024-05-13 20:47:38.010416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.129 [2024-05-13 20:47:38.010430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-05-13 20:47:38.020385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.129 [2024-05-13 20:47:38.020449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.129 [2024-05-13 20:47:38.020465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.129 [2024-05-13 20:47:38.020472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.129 [2024-05-13 20:47:38.020478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.129 [2024-05-13 20:47:38.020492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-05-13 20:47:38.030497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.129 [2024-05-13 20:47:38.030565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.129 [2024-05-13 20:47:38.030580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.129 [2024-05-13 20:47:38.030587] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.129 [2024-05-13 20:47:38.030593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.129 [2024-05-13 20:47:38.030607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.129 qpair failed and we were unable to recover it. 
00:34:22.129 [2024-05-13 20:47:38.040466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.129 [2024-05-13 20:47:38.040536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.129 [2024-05-13 20:47:38.040551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.129 [2024-05-13 20:47:38.040558] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.129 [2024-05-13 20:47:38.040564] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.129 [2024-05-13 20:47:38.040578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-05-13 20:47:38.050548] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.129 [2024-05-13 20:47:38.050623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.129 [2024-05-13 20:47:38.050638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.129 [2024-05-13 20:47:38.050645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.129 [2024-05-13 20:47:38.050651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.129 [2024-05-13 20:47:38.050665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-05-13 20:47:38.060522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.129 [2024-05-13 20:47:38.060588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.129 [2024-05-13 20:47:38.060603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.129 [2024-05-13 20:47:38.060609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.129 [2024-05-13 20:47:38.060615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.129 [2024-05-13 20:47:38.060629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.129 qpair failed and we were unable to recover it. 
00:34:22.392 [2024-05-13 20:47:38.070550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.392 [2024-05-13 20:47:38.070610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.392 [2024-05-13 20:47:38.070625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.392 [2024-05-13 20:47:38.070633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.392 [2024-05-13 20:47:38.070639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.392 [2024-05-13 20:47:38.070653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.392 qpair failed and we were unable to recover it. 00:34:22.392 [2024-05-13 20:47:38.080590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.392 [2024-05-13 20:47:38.080659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.392 [2024-05-13 20:47:38.080674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.392 [2024-05-13 20:47:38.080681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.392 [2024-05-13 20:47:38.080687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.392 [2024-05-13 20:47:38.080701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.392 qpair failed and we were unable to recover it. 00:34:22.392 [2024-05-13 20:47:38.090651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.392 [2024-05-13 20:47:38.090723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.392 [2024-05-13 20:47:38.090742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.392 [2024-05-13 20:47:38.090749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.392 [2024-05-13 20:47:38.090755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.392 [2024-05-13 20:47:38.090769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.392 qpair failed and we were unable to recover it. 
00:34:22.392 [2024-05-13 20:47:38.100620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.392 [2024-05-13 20:47:38.100684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.392 [2024-05-13 20:47:38.100699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.392 [2024-05-13 20:47:38.100706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.392 [2024-05-13 20:47:38.100712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.392 [2024-05-13 20:47:38.100726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.392 qpair failed and we were unable to recover it. 00:34:22.392 [2024-05-13 20:47:38.110668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.392 [2024-05-13 20:47:38.110759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.392 [2024-05-13 20:47:38.110775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.392 [2024-05-13 20:47:38.110782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.392 [2024-05-13 20:47:38.110788] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.392 [2024-05-13 20:47:38.110802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.392 qpair failed and we were unable to recover it. 00:34:22.392 [2024-05-13 20:47:38.120734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.392 [2024-05-13 20:47:38.120849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.392 [2024-05-13 20:47:38.120864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.392 [2024-05-13 20:47:38.120871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.392 [2024-05-13 20:47:38.120878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.392 [2024-05-13 20:47:38.120892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.392 qpair failed and we were unable to recover it. 
00:34:22.392 [2024-05-13 20:47:38.130604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.392 [2024-05-13 20:47:38.130672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.392 [2024-05-13 20:47:38.130687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.392 [2024-05-13 20:47:38.130695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.392 [2024-05-13 20:47:38.130701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.392 [2024-05-13 20:47:38.130724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.392 qpair failed and we were unable to recover it. 00:34:22.392 [2024-05-13 20:47:38.140766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.392 [2024-05-13 20:47:38.140836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.392 [2024-05-13 20:47:38.140851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.392 [2024-05-13 20:47:38.140858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.392 [2024-05-13 20:47:38.140865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.392 [2024-05-13 20:47:38.140879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.392 qpair failed and we were unable to recover it. 00:34:22.392 [2024-05-13 20:47:38.150752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.392 [2024-05-13 20:47:38.150821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.392 [2024-05-13 20:47:38.150837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.392 [2024-05-13 20:47:38.150844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.392 [2024-05-13 20:47:38.150849] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.392 [2024-05-13 20:47:38.150864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.392 qpair failed and we were unable to recover it. 
00:34:22.392 [2024-05-13 20:47:38.160790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.392 [2024-05-13 20:47:38.160855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.392 [2024-05-13 20:47:38.160871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.392 [2024-05-13 20:47:38.160878] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.392 [2024-05-13 20:47:38.160884] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.392 [2024-05-13 20:47:38.160898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.392 qpair failed and we were unable to recover it. 00:34:22.392 [2024-05-13 20:47:38.170814] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.392 [2024-05-13 20:47:38.170888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.392 [2024-05-13 20:47:38.170903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.392 [2024-05-13 20:47:38.170910] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.392 [2024-05-13 20:47:38.170916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.392 [2024-05-13 20:47:38.170930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.392 qpair failed and we were unable to recover it. 00:34:22.392 [2024-05-13 20:47:38.180844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.392 [2024-05-13 20:47:38.180912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.392 [2024-05-13 20:47:38.180931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.392 [2024-05-13 20:47:38.180938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.392 [2024-05-13 20:47:38.180944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.392 [2024-05-13 20:47:38.180958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.392 qpair failed and we were unable to recover it. 
00:34:22.392 [2024-05-13 20:47:38.190875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.392 [2024-05-13 20:47:38.190944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.392 [2024-05-13 20:47:38.190960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.393 [2024-05-13 20:47:38.190967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.393 [2024-05-13 20:47:38.190973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.393 [2024-05-13 20:47:38.190987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.393 qpair failed and we were unable to recover it. 00:34:22.393 [2024-05-13 20:47:38.200939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.393 [2024-05-13 20:47:38.201006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.393 [2024-05-13 20:47:38.201022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.393 [2024-05-13 20:47:38.201028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.393 [2024-05-13 20:47:38.201034] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.393 [2024-05-13 20:47:38.201048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.393 qpair failed and we were unable to recover it. 00:34:22.393 [2024-05-13 20:47:38.210937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.393 [2024-05-13 20:47:38.211010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.393 [2024-05-13 20:47:38.211034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.393 [2024-05-13 20:47:38.211042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.393 [2024-05-13 20:47:38.211049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.393 [2024-05-13 20:47:38.211068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.393 qpair failed and we were unable to recover it. 
00:34:22.393 [2024-05-13 20:47:38.220957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.393 [2024-05-13 20:47:38.221024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.393 [2024-05-13 20:47:38.221048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.393 [2024-05-13 20:47:38.221056] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.393 [2024-05-13 20:47:38.221063] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.393 [2024-05-13 20:47:38.221086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.393 qpair failed and we were unable to recover it. 00:34:22.393 [2024-05-13 20:47:38.230992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.393 [2024-05-13 20:47:38.231058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.393 [2024-05-13 20:47:38.231075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.393 [2024-05-13 20:47:38.231082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.393 [2024-05-13 20:47:38.231088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.393 [2024-05-13 20:47:38.231103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.393 qpair failed and we were unable to recover it. 00:34:22.393 [2024-05-13 20:47:38.241033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.393 [2024-05-13 20:47:38.241100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.393 [2024-05-13 20:47:38.241117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.393 [2024-05-13 20:47:38.241124] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.393 [2024-05-13 20:47:38.241130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.393 [2024-05-13 20:47:38.241144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.393 qpair failed and we were unable to recover it. 
00:34:22.393 [2024-05-13 20:47:38.251056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.393 [2024-05-13 20:47:38.251147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.393 [2024-05-13 20:47:38.251163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.393 [2024-05-13 20:47:38.251170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.393 [2024-05-13 20:47:38.251176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.393 [2024-05-13 20:47:38.251191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.393 qpair failed and we were unable to recover it. 00:34:22.393 [2024-05-13 20:47:38.261053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.393 [2024-05-13 20:47:38.261121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.393 [2024-05-13 20:47:38.261137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.393 [2024-05-13 20:47:38.261143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.393 [2024-05-13 20:47:38.261149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.393 [2024-05-13 20:47:38.261164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.393 qpair failed and we were unable to recover it. 00:34:22.393 [2024-05-13 20:47:38.271064] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.393 [2024-05-13 20:47:38.271128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.393 [2024-05-13 20:47:38.271144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.393 [2024-05-13 20:47:38.271151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.393 [2024-05-13 20:47:38.271157] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.393 [2024-05-13 20:47:38.271171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.393 qpair failed and we were unable to recover it. 
00:34:22.393 [2024-05-13 20:47:38.281150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.393 [2024-05-13 20:47:38.281214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.393 [2024-05-13 20:47:38.281230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.393 [2024-05-13 20:47:38.281237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.393 [2024-05-13 20:47:38.281243] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.393 [2024-05-13 20:47:38.281257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.393 qpair failed and we were unable to recover it. 00:34:22.393 [2024-05-13 20:47:38.291175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.393 [2024-05-13 20:47:38.291246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.393 [2024-05-13 20:47:38.291262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.393 [2024-05-13 20:47:38.291269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.393 [2024-05-13 20:47:38.291275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.393 [2024-05-13 20:47:38.291289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.393 qpair failed and we were unable to recover it. 00:34:22.393 [2024-05-13 20:47:38.301225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.393 [2024-05-13 20:47:38.301296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.393 [2024-05-13 20:47:38.301312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.393 [2024-05-13 20:47:38.301324] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.393 [2024-05-13 20:47:38.301330] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.393 [2024-05-13 20:47:38.301344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.393 qpair failed and we were unable to recover it. 
00:34:22.393 [2024-05-13 20:47:38.311206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.393 [2024-05-13 20:47:38.311268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.393 [2024-05-13 20:47:38.311284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.393 [2024-05-13 20:47:38.311291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.393 [2024-05-13 20:47:38.311300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.393 [2024-05-13 20:47:38.311318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.393 qpair failed and we were unable to recover it. 00:34:22.393 [2024-05-13 20:47:38.321254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.393 [2024-05-13 20:47:38.321324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.393 [2024-05-13 20:47:38.321340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.393 [2024-05-13 20:47:38.321347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.393 [2024-05-13 20:47:38.321353] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.393 [2024-05-13 20:47:38.321367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.393 qpair failed and we were unable to recover it. 00:34:22.394 [2024-05-13 20:47:38.331272] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.394 [2024-05-13 20:47:38.331343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.394 [2024-05-13 20:47:38.331359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.394 [2024-05-13 20:47:38.331366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.394 [2024-05-13 20:47:38.331372] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.394 [2024-05-13 20:47:38.331387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.394 qpair failed and we were unable to recover it. 
00:34:22.656 [2024-05-13 20:47:38.341189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.656 [2024-05-13 20:47:38.341254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.656 [2024-05-13 20:47:38.341270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.656 [2024-05-13 20:47:38.341277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.656 [2024-05-13 20:47:38.341283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.656 [2024-05-13 20:47:38.341298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.656 qpair failed and we were unable to recover it. 00:34:22.656 [2024-05-13 20:47:38.351330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.656 [2024-05-13 20:47:38.351399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.656 [2024-05-13 20:47:38.351414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.656 [2024-05-13 20:47:38.351422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.656 [2024-05-13 20:47:38.351428] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.656 [2024-05-13 20:47:38.351442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.656 qpair failed and we were unable to recover it. 00:34:22.656 [2024-05-13 20:47:38.361373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.656 [2024-05-13 20:47:38.361456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.656 [2024-05-13 20:47:38.361472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.656 [2024-05-13 20:47:38.361478] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.656 [2024-05-13 20:47:38.361485] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.656 [2024-05-13 20:47:38.361499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.656 qpair failed and we were unable to recover it. 
00:34:22.656 [2024-05-13 20:47:38.371381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.656 [2024-05-13 20:47:38.371503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.656 [2024-05-13 20:47:38.371519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.656 [2024-05-13 20:47:38.371528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.656 [2024-05-13 20:47:38.371533] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.656 [2024-05-13 20:47:38.371548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.656 qpair failed and we were unable to recover it. 00:34:22.656 [2024-05-13 20:47:38.381417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.656 [2024-05-13 20:47:38.381479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.656 [2024-05-13 20:47:38.381495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.656 [2024-05-13 20:47:38.381502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.656 [2024-05-13 20:47:38.381508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.657 [2024-05-13 20:47:38.381522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.657 qpair failed and we were unable to recover it. 00:34:22.657 [2024-05-13 20:47:38.391460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.657 [2024-05-13 20:47:38.391524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.657 [2024-05-13 20:47:38.391540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.657 [2024-05-13 20:47:38.391547] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.657 [2024-05-13 20:47:38.391553] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.657 [2024-05-13 20:47:38.391567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.657 qpair failed and we were unable to recover it. 
00:34:22.657 [2024-05-13 20:47:38.401496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.657 [2024-05-13 20:47:38.401579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.657 [2024-05-13 20:47:38.401594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.657 [2024-05-13 20:47:38.401605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.657 [2024-05-13 20:47:38.401611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.657 [2024-05-13 20:47:38.401625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.657 qpair failed and we were unable to recover it. 00:34:22.657 [2024-05-13 20:47:38.411528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.657 [2024-05-13 20:47:38.411615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.657 [2024-05-13 20:47:38.411630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.657 [2024-05-13 20:47:38.411637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.657 [2024-05-13 20:47:38.411644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.657 [2024-05-13 20:47:38.411658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.657 qpair failed and we were unable to recover it. 00:34:22.657 [2024-05-13 20:47:38.421525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.657 [2024-05-13 20:47:38.421590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.657 [2024-05-13 20:47:38.421606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.657 [2024-05-13 20:47:38.421613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.657 [2024-05-13 20:47:38.421619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.657 [2024-05-13 20:47:38.421633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.657 qpair failed and we were unable to recover it. 
00:34:22.657 [2024-05-13 20:47:38.431491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.657 [2024-05-13 20:47:38.431580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.657 [2024-05-13 20:47:38.431595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.657 [2024-05-13 20:47:38.431602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.657 [2024-05-13 20:47:38.431608] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.657 [2024-05-13 20:47:38.431622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.657 qpair failed and we were unable to recover it. 00:34:22.657 [2024-05-13 20:47:38.441608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.657 [2024-05-13 20:47:38.441680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.657 [2024-05-13 20:47:38.441696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.657 [2024-05-13 20:47:38.441703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.657 [2024-05-13 20:47:38.441709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.657 [2024-05-13 20:47:38.441723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.657 qpair failed and we were unable to recover it. 00:34:22.657 [2024-05-13 20:47:38.451594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.657 [2024-05-13 20:47:38.451660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.657 [2024-05-13 20:47:38.451675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.657 [2024-05-13 20:47:38.451683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.657 [2024-05-13 20:47:38.451689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.657 [2024-05-13 20:47:38.451702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.657 qpair failed and we were unable to recover it. 
00:34:22.657 [2024-05-13 20:47:38.461531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.657 [2024-05-13 20:47:38.461630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.657 [2024-05-13 20:47:38.461646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.657 [2024-05-13 20:47:38.461653] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.657 [2024-05-13 20:47:38.461659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.657 [2024-05-13 20:47:38.461673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.657 qpair failed and we were unable to recover it. 00:34:22.657 [2024-05-13 20:47:38.471735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.657 [2024-05-13 20:47:38.471805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.657 [2024-05-13 20:47:38.471821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.657 [2024-05-13 20:47:38.471827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.657 [2024-05-13 20:47:38.471833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.657 [2024-05-13 20:47:38.471847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.657 qpair failed and we were unable to recover it. 00:34:22.657 [2024-05-13 20:47:38.481678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.657 [2024-05-13 20:47:38.481747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.657 [2024-05-13 20:47:38.481762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.657 [2024-05-13 20:47:38.481769] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.657 [2024-05-13 20:47:38.481775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.657 [2024-05-13 20:47:38.481789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.657 qpair failed and we were unable to recover it. 
00:34:22.657 [2024-05-13 20:47:38.491709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.657 [2024-05-13 20:47:38.491776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.657 [2024-05-13 20:47:38.491791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.657 [2024-05-13 20:47:38.491801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.657 [2024-05-13 20:47:38.491807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.657 [2024-05-13 20:47:38.491822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.657 qpair failed and we were unable to recover it. 00:34:22.657 [2024-05-13 20:47:38.501626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.657 [2024-05-13 20:47:38.501695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.657 [2024-05-13 20:47:38.501711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.657 [2024-05-13 20:47:38.501718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.657 [2024-05-13 20:47:38.501724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.657 [2024-05-13 20:47:38.501738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.657 qpair failed and we were unable to recover it. 00:34:22.657 [2024-05-13 20:47:38.511755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.657 [2024-05-13 20:47:38.511825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.657 [2024-05-13 20:47:38.511840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.657 [2024-05-13 20:47:38.511847] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.657 [2024-05-13 20:47:38.511854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.657 [2024-05-13 20:47:38.511868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.657 qpair failed and we were unable to recover it. 
00:34:22.657 [2024-05-13 20:47:38.521771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.657 [2024-05-13 20:47:38.521840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.657 [2024-05-13 20:47:38.521856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.658 [2024-05-13 20:47:38.521863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.658 [2024-05-13 20:47:38.521869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.658 [2024-05-13 20:47:38.521883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.658 qpair failed and we were unable to recover it. 00:34:22.658 [2024-05-13 20:47:38.531815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.658 [2024-05-13 20:47:38.531882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.658 [2024-05-13 20:47:38.531899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.658 [2024-05-13 20:47:38.531906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.658 [2024-05-13 20:47:38.531913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.658 [2024-05-13 20:47:38.531927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.658 qpair failed and we were unable to recover it. 00:34:22.658 [2024-05-13 20:47:38.541819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.658 [2024-05-13 20:47:38.541881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.658 [2024-05-13 20:47:38.541897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.658 [2024-05-13 20:47:38.541904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.658 [2024-05-13 20:47:38.541910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.658 [2024-05-13 20:47:38.541925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.658 qpair failed and we were unable to recover it. 
00:34:22.658 [2024-05-13 20:47:38.551756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.658 [2024-05-13 20:47:38.551821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.658 [2024-05-13 20:47:38.551837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.658 [2024-05-13 20:47:38.551845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.658 [2024-05-13 20:47:38.551851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.658 [2024-05-13 20:47:38.551866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.658 qpair failed and we were unable to recover it. 00:34:22.658 [2024-05-13 20:47:38.561897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.658 [2024-05-13 20:47:38.561964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.658 [2024-05-13 20:47:38.561980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.658 [2024-05-13 20:47:38.561987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.658 [2024-05-13 20:47:38.561993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.658 [2024-05-13 20:47:38.562008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.658 qpair failed and we were unable to recover it. 00:34:22.658 [2024-05-13 20:47:38.571812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.658 [2024-05-13 20:47:38.571892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.658 [2024-05-13 20:47:38.571908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.658 [2024-05-13 20:47:38.571915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.658 [2024-05-13 20:47:38.571921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.658 [2024-05-13 20:47:38.571936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.658 qpair failed and we were unable to recover it. 
00:34:22.658 [2024-05-13 20:47:38.581956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.658 [2024-05-13 20:47:38.582022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.658 [2024-05-13 20:47:38.582041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.658 [2024-05-13 20:47:38.582048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.658 [2024-05-13 20:47:38.582054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.658 [2024-05-13 20:47:38.582068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.658 qpair failed and we were unable to recover it. 00:34:22.658 [2024-05-13 20:47:38.592063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.658 [2024-05-13 20:47:38.592122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.658 [2024-05-13 20:47:38.592140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.658 [2024-05-13 20:47:38.592148] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.658 [2024-05-13 20:47:38.592155] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.658 [2024-05-13 20:47:38.592169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.658 qpair failed and we were unable to recover it. 00:34:22.920 [2024-05-13 20:47:38.602024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.920 [2024-05-13 20:47:38.602093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.920 [2024-05-13 20:47:38.602109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.920 [2024-05-13 20:47:38.602116] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.920 [2024-05-13 20:47:38.602122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.920 [2024-05-13 20:47:38.602137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.920 qpair failed and we were unable to recover it. 
00:34:22.920 [2024-05-13 20:47:38.612048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.920 [2024-05-13 20:47:38.612126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.920 [2024-05-13 20:47:38.612150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.920 [2024-05-13 20:47:38.612159] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.920 [2024-05-13 20:47:38.612165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.920 [2024-05-13 20:47:38.612183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.920 qpair failed and we were unable to recover it. 00:34:22.920 [2024-05-13 20:47:38.621957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.920 [2024-05-13 20:47:38.622019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.920 [2024-05-13 20:47:38.622036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.920 [2024-05-13 20:47:38.622043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.920 [2024-05-13 20:47:38.622050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.920 [2024-05-13 20:47:38.622075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.920 qpair failed and we were unable to recover it. 00:34:22.920 [2024-05-13 20:47:38.631981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.920 [2024-05-13 20:47:38.632046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.920 [2024-05-13 20:47:38.632062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.920 [2024-05-13 20:47:38.632069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.920 [2024-05-13 20:47:38.632075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.920 [2024-05-13 20:47:38.632090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.920 qpair failed and we were unable to recover it. 
00:34:22.920 [2024-05-13 20:47:38.642124] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.920 [2024-05-13 20:47:38.642190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.920 [2024-05-13 20:47:38.642206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.920 [2024-05-13 20:47:38.642213] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.920 [2024-05-13 20:47:38.642219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.920 [2024-05-13 20:47:38.642234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.920 qpair failed and we were unable to recover it. 00:34:22.920 [2024-05-13 20:47:38.652152] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.920 [2024-05-13 20:47:38.652220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.920 [2024-05-13 20:47:38.652236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.920 [2024-05-13 20:47:38.652243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.920 [2024-05-13 20:47:38.652249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.920 [2024-05-13 20:47:38.652263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.920 qpair failed and we were unable to recover it. 00:34:22.920 [2024-05-13 20:47:38.662089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.920 [2024-05-13 20:47:38.662156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.920 [2024-05-13 20:47:38.662172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.920 [2024-05-13 20:47:38.662179] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.920 [2024-05-13 20:47:38.662185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.920 [2024-05-13 20:47:38.662200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.920 qpair failed and we were unable to recover it. 
00:34:22.920 [2024-05-13 20:47:38.672099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.920 [2024-05-13 20:47:38.672169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.920 [2024-05-13 20:47:38.672188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.920 [2024-05-13 20:47:38.672196] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.920 [2024-05-13 20:47:38.672202] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.920 [2024-05-13 20:47:38.672216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.920 qpair failed and we were unable to recover it. 00:34:22.920 [2024-05-13 20:47:38.682132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.920 [2024-05-13 20:47:38.682207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.920 [2024-05-13 20:47:38.682223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.920 [2024-05-13 20:47:38.682230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.920 [2024-05-13 20:47:38.682236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.920 [2024-05-13 20:47:38.682251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.921 qpair failed and we were unable to recover it. 00:34:22.921 [2024-05-13 20:47:38.692271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.921 [2024-05-13 20:47:38.692347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.921 [2024-05-13 20:47:38.692363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.921 [2024-05-13 20:47:38.692370] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.921 [2024-05-13 20:47:38.692376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.921 [2024-05-13 20:47:38.692391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.921 qpair failed and we were unable to recover it. 
00:34:22.921 [2024-05-13 20:47:38.702290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.921 [2024-05-13 20:47:38.702353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.921 [2024-05-13 20:47:38.702369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.921 [2024-05-13 20:47:38.702376] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.921 [2024-05-13 20:47:38.702382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.921 [2024-05-13 20:47:38.702396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.921 qpair failed and we were unable to recover it. 00:34:22.921 [2024-05-13 20:47:38.712322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.921 [2024-05-13 20:47:38.712387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.921 [2024-05-13 20:47:38.712403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.921 [2024-05-13 20:47:38.712410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.921 [2024-05-13 20:47:38.712423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.921 [2024-05-13 20:47:38.712438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.921 qpair failed and we were unable to recover it. 00:34:22.921 [2024-05-13 20:47:38.722377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.921 [2024-05-13 20:47:38.722447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.921 [2024-05-13 20:47:38.722463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.921 [2024-05-13 20:47:38.722470] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.921 [2024-05-13 20:47:38.722476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.921 [2024-05-13 20:47:38.722490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.921 qpair failed and we were unable to recover it. 
00:34:22.921 [2024-05-13 20:47:38.732373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.921 [2024-05-13 20:47:38.732445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.921 [2024-05-13 20:47:38.732461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.921 [2024-05-13 20:47:38.732468] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.921 [2024-05-13 20:47:38.732474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.921 [2024-05-13 20:47:38.732489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.921 qpair failed and we were unable to recover it. 00:34:22.921 [2024-05-13 20:47:38.742431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.921 [2024-05-13 20:47:38.742532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.921 [2024-05-13 20:47:38.742547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.921 [2024-05-13 20:47:38.742554] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.921 [2024-05-13 20:47:38.742561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.921 [2024-05-13 20:47:38.742575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.921 qpair failed and we were unable to recover it. 00:34:22.921 [2024-05-13 20:47:38.752457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.921 [2024-05-13 20:47:38.752521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.921 [2024-05-13 20:47:38.752537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.921 [2024-05-13 20:47:38.752544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.921 [2024-05-13 20:47:38.752550] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.921 [2024-05-13 20:47:38.752564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.921 qpair failed and we were unable to recover it. 
00:34:22.921 [2024-05-13 20:47:38.762452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.921 [2024-05-13 20:47:38.762520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.921 [2024-05-13 20:47:38.762536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.921 [2024-05-13 20:47:38.762543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.921 [2024-05-13 20:47:38.762549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.921 [2024-05-13 20:47:38.762564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.921 qpair failed and we were unable to recover it. 00:34:22.921 [2024-05-13 20:47:38.772487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.921 [2024-05-13 20:47:38.772555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.921 [2024-05-13 20:47:38.772571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.921 [2024-05-13 20:47:38.772578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.921 [2024-05-13 20:47:38.772584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.921 [2024-05-13 20:47:38.772598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.921 qpair failed and we were unable to recover it. 00:34:22.921 [2024-05-13 20:47:38.782521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.921 [2024-05-13 20:47:38.782648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.921 [2024-05-13 20:47:38.782667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.921 [2024-05-13 20:47:38.782674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.921 [2024-05-13 20:47:38.782680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.921 [2024-05-13 20:47:38.782695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.921 qpair failed and we were unable to recover it. 
00:34:22.921 [2024-05-13 20:47:38.792536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.921 [2024-05-13 20:47:38.792602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.921 [2024-05-13 20:47:38.792617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.921 [2024-05-13 20:47:38.792624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.921 [2024-05-13 20:47:38.792630] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.921 [2024-05-13 20:47:38.792645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.921 qpair failed and we were unable to recover it. 00:34:22.921 [2024-05-13 20:47:38.802578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.921 [2024-05-13 20:47:38.802642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.921 [2024-05-13 20:47:38.802657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.921 [2024-05-13 20:47:38.802668] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.921 [2024-05-13 20:47:38.802674] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.921 [2024-05-13 20:47:38.802688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.921 qpair failed and we were unable to recover it. 00:34:22.921 [2024-05-13 20:47:38.812639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.921 [2024-05-13 20:47:38.812717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.921 [2024-05-13 20:47:38.812733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.921 [2024-05-13 20:47:38.812740] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.921 [2024-05-13 20:47:38.812747] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.921 [2024-05-13 20:47:38.812761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.921 qpair failed and we were unable to recover it. 
00:34:22.921 [2024-05-13 20:47:38.822650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.921 [2024-05-13 20:47:38.822710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.921 [2024-05-13 20:47:38.822726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.921 [2024-05-13 20:47:38.822733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.921 [2024-05-13 20:47:38.822739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.922 [2024-05-13 20:47:38.822754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.922 qpair failed and we were unable to recover it. 00:34:22.922 [2024-05-13 20:47:38.832621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.922 [2024-05-13 20:47:38.832689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.922 [2024-05-13 20:47:38.832705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.922 [2024-05-13 20:47:38.832712] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.922 [2024-05-13 20:47:38.832718] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.922 [2024-05-13 20:47:38.832732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.922 qpair failed and we were unable to recover it. 00:34:22.922 [2024-05-13 20:47:38.842697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.922 [2024-05-13 20:47:38.842763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.922 [2024-05-13 20:47:38.842779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.922 [2024-05-13 20:47:38.842786] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.922 [2024-05-13 20:47:38.842792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.922 [2024-05-13 20:47:38.842805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.922 qpair failed and we were unable to recover it. 
00:34:22.922 [2024-05-13 20:47:38.852715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.922 [2024-05-13 20:47:38.852788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.922 [2024-05-13 20:47:38.852804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.922 [2024-05-13 20:47:38.852810] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.922 [2024-05-13 20:47:38.852816] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:22.922 [2024-05-13 20:47:38.852831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.922 qpair failed and we were unable to recover it. 00:34:22.922 [2024-05-13 20:47:38.862747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.184 [2024-05-13 20:47:38.862814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.184 [2024-05-13 20:47:38.862830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.184 [2024-05-13 20:47:38.862837] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.184 [2024-05-13 20:47:38.862845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.184 [2024-05-13 20:47:38.862862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.184 qpair failed and we were unable to recover it. 00:34:23.184 [2024-05-13 20:47:38.872756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.184 [2024-05-13 20:47:38.872827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.184 [2024-05-13 20:47:38.872842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.184 [2024-05-13 20:47:38.872849] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.184 [2024-05-13 20:47:38.872855] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.184 [2024-05-13 20:47:38.872869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.184 qpair failed and we were unable to recover it. 
00:34:23.184 [2024-05-13 20:47:38.882793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.184 [2024-05-13 20:47:38.882859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.184 [2024-05-13 20:47:38.882875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.184 [2024-05-13 20:47:38.882881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.184 [2024-05-13 20:47:38.882887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.184 [2024-05-13 20:47:38.882901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.184 qpair failed and we were unable to recover it. 00:34:23.184 [2024-05-13 20:47:38.892874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.184 [2024-05-13 20:47:38.892945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.184 [2024-05-13 20:47:38.892960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.184 [2024-05-13 20:47:38.892971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.184 [2024-05-13 20:47:38.892977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.184 [2024-05-13 20:47:38.892991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.184 qpair failed and we were unable to recover it. 00:34:23.184 [2024-05-13 20:47:38.902859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.184 [2024-05-13 20:47:38.902926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.184 [2024-05-13 20:47:38.902942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.184 [2024-05-13 20:47:38.902949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.184 [2024-05-13 20:47:38.902955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.184 [2024-05-13 20:47:38.902969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.184 qpair failed and we were unable to recover it. 
00:34:23.184 [2024-05-13 20:47:38.912777] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.184 [2024-05-13 20:47:38.912845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.184 [2024-05-13 20:47:38.912861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.184 [2024-05-13 20:47:38.912868] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.184 [2024-05-13 20:47:38.912874] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.184 [2024-05-13 20:47:38.912888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.184 qpair failed and we were unable to recover it. 00:34:23.184 [2024-05-13 20:47:38.922921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.184 [2024-05-13 20:47:38.922986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.184 [2024-05-13 20:47:38.923001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.184 [2024-05-13 20:47:38.923008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.184 [2024-05-13 20:47:38.923015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.184 [2024-05-13 20:47:38.923029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.184 qpair failed and we were unable to recover it. 00:34:23.184 [2024-05-13 20:47:38.932931] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.184 [2024-05-13 20:47:38.933035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.184 [2024-05-13 20:47:38.933059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.184 [2024-05-13 20:47:38.933068] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.184 [2024-05-13 20:47:38.933074] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.184 [2024-05-13 20:47:38.933093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.184 qpair failed and we were unable to recover it. 
00:34:23.184 [2024-05-13 20:47:38.943009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.184 [2024-05-13 20:47:38.943103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.184 [2024-05-13 20:47:38.943126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.184 [2024-05-13 20:47:38.943135] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.184 [2024-05-13 20:47:38.943141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.184 [2024-05-13 20:47:38.943160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.184 qpair failed and we were unable to recover it. 00:34:23.184 [2024-05-13 20:47:38.953008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.184 [2024-05-13 20:47:38.953074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.184 [2024-05-13 20:47:38.953091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.184 [2024-05-13 20:47:38.953098] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.184 [2024-05-13 20:47:38.953104] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.184 [2024-05-13 20:47:38.953119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.184 qpair failed and we were unable to recover it. 00:34:23.184 [2024-05-13 20:47:38.963024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.184 [2024-05-13 20:47:38.963098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.184 [2024-05-13 20:47:38.963122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.184 [2024-05-13 20:47:38.963130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.184 [2024-05-13 20:47:38.963137] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.184 [2024-05-13 20:47:38.963155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.184 qpair failed and we were unable to recover it. 
00:34:23.184 [2024-05-13 20:47:38.973066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.184 [2024-05-13 20:47:38.973140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.184 [2024-05-13 20:47:38.973157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.184 [2024-05-13 20:47:38.973164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.184 [2024-05-13 20:47:38.973170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.184 [2024-05-13 20:47:38.973185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.184 qpair failed and we were unable to recover it. 00:34:23.184 [2024-05-13 20:47:38.983077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.184 [2024-05-13 20:47:38.983141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.184 [2024-05-13 20:47:38.983161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.184 [2024-05-13 20:47:38.983168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.185 [2024-05-13 20:47:38.983174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.185 [2024-05-13 20:47:38.983189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.185 qpair failed and we were unable to recover it. 00:34:23.185 [2024-05-13 20:47:38.993145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.185 [2024-05-13 20:47:38.993211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.185 [2024-05-13 20:47:38.993226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.185 [2024-05-13 20:47:38.993233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.185 [2024-05-13 20:47:38.993239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.185 [2024-05-13 20:47:38.993253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.185 qpair failed and we were unable to recover it. 
00:34:23.185 [2024-05-13 20:47:39.003137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.185 [2024-05-13 20:47:39.003207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.185 [2024-05-13 20:47:39.003223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.185 [2024-05-13 20:47:39.003230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.185 [2024-05-13 20:47:39.003236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.185 [2024-05-13 20:47:39.003250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.185 qpair failed and we were unable to recover it. 00:34:23.185 [2024-05-13 20:47:39.013173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.185 [2024-05-13 20:47:39.013245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.185 [2024-05-13 20:47:39.013260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.185 [2024-05-13 20:47:39.013267] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.185 [2024-05-13 20:47:39.013274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.185 [2024-05-13 20:47:39.013288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.185 qpair failed and we were unable to recover it. 00:34:23.185 [2024-05-13 20:47:39.023070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.185 [2024-05-13 20:47:39.023136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.185 [2024-05-13 20:47:39.023151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.185 [2024-05-13 20:47:39.023158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.185 [2024-05-13 20:47:39.023165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.185 [2024-05-13 20:47:39.023183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.185 qpair failed and we were unable to recover it. 
00:34:23.185 [2024-05-13 20:47:39.033241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.185 [2024-05-13 20:47:39.033305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.185 [2024-05-13 20:47:39.033325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.185 [2024-05-13 20:47:39.033332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.185 [2024-05-13 20:47:39.033338] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.185 [2024-05-13 20:47:39.033353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.185 qpair failed and we were unable to recover it. 00:34:23.185 [2024-05-13 20:47:39.043158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.185 [2024-05-13 20:47:39.043221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.185 [2024-05-13 20:47:39.043237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.185 [2024-05-13 20:47:39.043244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.185 [2024-05-13 20:47:39.043250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.185 [2024-05-13 20:47:39.043264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.185 qpair failed and we were unable to recover it. 00:34:23.185 [2024-05-13 20:47:39.053252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.185 [2024-05-13 20:47:39.053328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.185 [2024-05-13 20:47:39.053344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.185 [2024-05-13 20:47:39.053351] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.185 [2024-05-13 20:47:39.053357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.185 [2024-05-13 20:47:39.053371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.185 qpair failed and we were unable to recover it. 
00:34:23.185 [2024-05-13 20:47:39.063267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.185 [2024-05-13 20:47:39.063336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.185 [2024-05-13 20:47:39.063352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.185 [2024-05-13 20:47:39.063359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.185 [2024-05-13 20:47:39.063365] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.185 [2024-05-13 20:47:39.063380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.185 qpair failed and we were unable to recover it. 00:34:23.185 [2024-05-13 20:47:39.073339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.185 [2024-05-13 20:47:39.073404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.185 [2024-05-13 20:47:39.073423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.185 [2024-05-13 20:47:39.073430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.185 [2024-05-13 20:47:39.073436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.185 [2024-05-13 20:47:39.073451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.185 qpair failed and we were unable to recover it. 00:34:23.185 [2024-05-13 20:47:39.083307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.185 [2024-05-13 20:47:39.083392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.185 [2024-05-13 20:47:39.083408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.185 [2024-05-13 20:47:39.083415] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.185 [2024-05-13 20:47:39.083421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.185 [2024-05-13 20:47:39.083440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.185 qpair failed and we were unable to recover it. 
00:34:23.185 [2024-05-13 20:47:39.093375] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.185 [2024-05-13 20:47:39.093449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.185 [2024-05-13 20:47:39.093464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.185 [2024-05-13 20:47:39.093471] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.185 [2024-05-13 20:47:39.093477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.185 [2024-05-13 20:47:39.093492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.185 qpair failed and we were unable to recover it. 00:34:23.185 [2024-05-13 20:47:39.103417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.185 [2024-05-13 20:47:39.103521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.185 [2024-05-13 20:47:39.103536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.185 [2024-05-13 20:47:39.103543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.185 [2024-05-13 20:47:39.103549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.185 [2024-05-13 20:47:39.103564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.185 qpair failed and we were unable to recover it. 00:34:23.185 [2024-05-13 20:47:39.113375] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.185 [2024-05-13 20:47:39.113473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.185 [2024-05-13 20:47:39.113488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.185 [2024-05-13 20:47:39.113495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.185 [2024-05-13 20:47:39.113505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.185 [2024-05-13 20:47:39.113519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.185 qpair failed and we were unable to recover it. 
00:34:23.185 [2024-05-13 20:47:39.123487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.185 [2024-05-13 20:47:39.123578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.185 [2024-05-13 20:47:39.123594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.186 [2024-05-13 20:47:39.123601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.186 [2024-05-13 20:47:39.123607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.186 [2024-05-13 20:47:39.123621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.186 qpair failed and we were unable to recover it. 00:34:23.448 [2024-05-13 20:47:39.133487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.448 [2024-05-13 20:47:39.133598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.448 [2024-05-13 20:47:39.133614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.448 [2024-05-13 20:47:39.133621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.448 [2024-05-13 20:47:39.133628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.448 [2024-05-13 20:47:39.133642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.448 qpair failed and we were unable to recover it. 00:34:23.448 [2024-05-13 20:47:39.143445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.448 [2024-05-13 20:47:39.143505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.448 [2024-05-13 20:47:39.143521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.448 [2024-05-13 20:47:39.143528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.448 [2024-05-13 20:47:39.143534] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.448 [2024-05-13 20:47:39.143548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.448 qpair failed and we were unable to recover it. 
00:34:23.448 [2024-05-13 20:47:39.153540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.448 [2024-05-13 20:47:39.153608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.448 [2024-05-13 20:47:39.153623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.448 [2024-05-13 20:47:39.153630] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.448 [2024-05-13 20:47:39.153636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.448 [2024-05-13 20:47:39.153650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.448 qpair failed and we were unable to recover it. 00:34:23.448 [2024-05-13 20:47:39.163579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.448 [2024-05-13 20:47:39.163652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.448 [2024-05-13 20:47:39.163668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.448 [2024-05-13 20:47:39.163675] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.448 [2024-05-13 20:47:39.163681] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.448 [2024-05-13 20:47:39.163695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.449 qpair failed and we were unable to recover it. 00:34:23.449 [2024-05-13 20:47:39.173546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.449 [2024-05-13 20:47:39.173639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.449 [2024-05-13 20:47:39.173655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.449 [2024-05-13 20:47:39.173662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.449 [2024-05-13 20:47:39.173668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.449 [2024-05-13 20:47:39.173683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.449 qpair failed and we were unable to recover it. 
00:34:23.449 [2024-05-13 20:47:39.183584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.449 [2024-05-13 20:47:39.183644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.449 [2024-05-13 20:47:39.183659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.449 [2024-05-13 20:47:39.183666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.449 [2024-05-13 20:47:39.183672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.449 [2024-05-13 20:47:39.183686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.449 qpair failed and we were unable to recover it. 00:34:23.449 [2024-05-13 20:47:39.193664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.449 [2024-05-13 20:47:39.193731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.449 [2024-05-13 20:47:39.193746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.449 [2024-05-13 20:47:39.193753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.449 [2024-05-13 20:47:39.193759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.449 [2024-05-13 20:47:39.193774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.449 qpair failed and we were unable to recover it. 00:34:23.449 [2024-05-13 20:47:39.203639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.449 [2024-05-13 20:47:39.203711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.449 [2024-05-13 20:47:39.203726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.449 [2024-05-13 20:47:39.203733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.449 [2024-05-13 20:47:39.203743] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.449 [2024-05-13 20:47:39.203757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.449 qpair failed and we were unable to recover it. 
00:34:23.449 [2024-05-13 20:47:39.213728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.449 [2024-05-13 20:47:39.213790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.449 [2024-05-13 20:47:39.213805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.449 [2024-05-13 20:47:39.213812] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.449 [2024-05-13 20:47:39.213818] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.449 [2024-05-13 20:47:39.213833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.449 qpair failed and we were unable to recover it. 00:34:23.449 [2024-05-13 20:47:39.223710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.449 [2024-05-13 20:47:39.223764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.449 [2024-05-13 20:47:39.223780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.449 [2024-05-13 20:47:39.223787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.449 [2024-05-13 20:47:39.223793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.449 [2024-05-13 20:47:39.223807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.449 qpair failed and we were unable to recover it. 00:34:23.449 [2024-05-13 20:47:39.233661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.449 [2024-05-13 20:47:39.233724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.449 [2024-05-13 20:47:39.233740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.449 [2024-05-13 20:47:39.233747] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.449 [2024-05-13 20:47:39.233752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.449 [2024-05-13 20:47:39.233766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.449 qpair failed and we were unable to recover it. 
00:34:23.449 [2024-05-13 20:47:39.243746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.449 [2024-05-13 20:47:39.243805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.449 [2024-05-13 20:47:39.243821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.449 [2024-05-13 20:47:39.243828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.449 [2024-05-13 20:47:39.243834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.449 [2024-05-13 20:47:39.243848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.449 qpair failed and we were unable to recover it. 00:34:23.449 [2024-05-13 20:47:39.253826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.449 [2024-05-13 20:47:39.253892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.449 [2024-05-13 20:47:39.253908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.449 [2024-05-13 20:47:39.253915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.449 [2024-05-13 20:47:39.253921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.449 [2024-05-13 20:47:39.253935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.449 qpair failed and we were unable to recover it. 00:34:23.449 [2024-05-13 20:47:39.263917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.449 [2024-05-13 20:47:39.264001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.449 [2024-05-13 20:47:39.264017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.449 [2024-05-13 20:47:39.264025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.449 [2024-05-13 20:47:39.264031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.449 [2024-05-13 20:47:39.264046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.449 qpair failed and we were unable to recover it. 
00:34:23.449 [2024-05-13 20:47:39.273900] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.449 [2024-05-13 20:47:39.274004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.449 [2024-05-13 20:47:39.274020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.449 [2024-05-13 20:47:39.274027] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.449 [2024-05-13 20:47:39.274033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.449 [2024-05-13 20:47:39.274047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.449 qpair failed and we were unable to recover it. 00:34:23.449 [2024-05-13 20:47:39.283896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.449 [2024-05-13 20:47:39.283957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.449 [2024-05-13 20:47:39.283972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.449 [2024-05-13 20:47:39.283979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.449 [2024-05-13 20:47:39.283985] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.449 [2024-05-13 20:47:39.283999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.449 qpair failed and we were unable to recover it. 00:34:23.449 [2024-05-13 20:47:39.293975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.449 [2024-05-13 20:47:39.294036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.449 [2024-05-13 20:47:39.294052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.449 [2024-05-13 20:47:39.294062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.449 [2024-05-13 20:47:39.294068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.449 [2024-05-13 20:47:39.294083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.449 qpair failed and we were unable to recover it. 
00:34:23.449 [2024-05-13 20:47:39.303890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.449 [2024-05-13 20:47:39.303949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.449 [2024-05-13 20:47:39.303964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.450 [2024-05-13 20:47:39.303972] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.450 [2024-05-13 20:47:39.303978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.450 [2024-05-13 20:47:39.303992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.450 qpair failed and we were unable to recover it. 00:34:23.450 [2024-05-13 20:47:39.313969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.450 [2024-05-13 20:47:39.314033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.450 [2024-05-13 20:47:39.314049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.450 [2024-05-13 20:47:39.314056] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.450 [2024-05-13 20:47:39.314062] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.450 [2024-05-13 20:47:39.314076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.450 qpair failed and we were unable to recover it. 00:34:23.450 [2024-05-13 20:47:39.323983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.450 [2024-05-13 20:47:39.324045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.450 [2024-05-13 20:47:39.324060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.450 [2024-05-13 20:47:39.324067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.450 [2024-05-13 20:47:39.324073] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.450 [2024-05-13 20:47:39.324087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.450 qpair failed and we were unable to recover it. 
00:34:23.450 [2024-05-13 20:47:39.334034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.450 [2024-05-13 20:47:39.334127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.450 [2024-05-13 20:47:39.334143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.450 [2024-05-13 20:47:39.334150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.450 [2024-05-13 20:47:39.334156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.450 [2024-05-13 20:47:39.334170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.450 qpair failed and we were unable to recover it. 00:34:23.450 [2024-05-13 20:47:39.344007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.450 [2024-05-13 20:47:39.344117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.450 [2024-05-13 20:47:39.344133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.450 [2024-05-13 20:47:39.344140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.450 [2024-05-13 20:47:39.344146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.450 [2024-05-13 20:47:39.344160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.450 qpair failed and we were unable to recover it. 00:34:23.450 [2024-05-13 20:47:39.354080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.450 [2024-05-13 20:47:39.354140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.450 [2024-05-13 20:47:39.354156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.450 [2024-05-13 20:47:39.354163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.450 [2024-05-13 20:47:39.354169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.450 [2024-05-13 20:47:39.354183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.450 qpair failed and we were unable to recover it. 
00:34:23.450 [2024-05-13 20:47:39.364066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.450 [2024-05-13 20:47:39.364136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.450 [2024-05-13 20:47:39.364153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.450 [2024-05-13 20:47:39.364160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.450 [2024-05-13 20:47:39.364166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.450 [2024-05-13 20:47:39.364180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.450 qpair failed and we were unable to recover it. 00:34:23.450 [2024-05-13 20:47:39.374147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.450 [2024-05-13 20:47:39.374211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.450 [2024-05-13 20:47:39.374227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.450 [2024-05-13 20:47:39.374234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.450 [2024-05-13 20:47:39.374240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.450 [2024-05-13 20:47:39.374254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.450 qpair failed and we were unable to recover it. 00:34:23.450 [2024-05-13 20:47:39.384004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.450 [2024-05-13 20:47:39.384067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.450 [2024-05-13 20:47:39.384086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.450 [2024-05-13 20:47:39.384093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.450 [2024-05-13 20:47:39.384099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.450 [2024-05-13 20:47:39.384113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.450 qpair failed and we were unable to recover it. 
00:34:23.713 [2024-05-13 20:47:39.394249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.713 [2024-05-13 20:47:39.394319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.713 [2024-05-13 20:47:39.394335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.713 [2024-05-13 20:47:39.394342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.713 [2024-05-13 20:47:39.394348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.713 [2024-05-13 20:47:39.394363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-05-13 20:47:39.404157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.713 [2024-05-13 20:47:39.404215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.713 [2024-05-13 20:47:39.404230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.713 [2024-05-13 20:47:39.404237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.713 [2024-05-13 20:47:39.404243] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.713 [2024-05-13 20:47:39.404257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-05-13 20:47:39.414139] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.713 [2024-05-13 20:47:39.414210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.713 [2024-05-13 20:47:39.414225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.713 [2024-05-13 20:47:39.414233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.713 [2024-05-13 20:47:39.414239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.713 [2024-05-13 20:47:39.414253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.713 qpair failed and we were unable to recover it. 
00:34:23.713 [2024-05-13 20:47:39.424221] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.713 [2024-05-13 20:47:39.424289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.713 [2024-05-13 20:47:39.424304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.713 [2024-05-13 20:47:39.424311] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.713 [2024-05-13 20:47:39.424322] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.713 [2024-05-13 20:47:39.424340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-05-13 20:47:39.434251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.713 [2024-05-13 20:47:39.434324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.713 [2024-05-13 20:47:39.434339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.713 [2024-05-13 20:47:39.434347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.713 [2024-05-13 20:47:39.434353] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.713 [2024-05-13 20:47:39.434367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-05-13 20:47:39.444309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.713 [2024-05-13 20:47:39.444413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.713 [2024-05-13 20:47:39.444429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.713 [2024-05-13 20:47:39.444436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.713 [2024-05-13 20:47:39.444443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.713 [2024-05-13 20:47:39.444458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.713 qpair failed and we were unable to recover it. 
00:34:23.713 [2024-05-13 20:47:39.454356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.713 [2024-05-13 20:47:39.454423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.713 [2024-05-13 20:47:39.454439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.713 [2024-05-13 20:47:39.454445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.713 [2024-05-13 20:47:39.454451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.713 [2024-05-13 20:47:39.454466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-05-13 20:47:39.464381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.713 [2024-05-13 20:47:39.464482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.713 [2024-05-13 20:47:39.464498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.713 [2024-05-13 20:47:39.464505] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.713 [2024-05-13 20:47:39.464511] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.713 [2024-05-13 20:47:39.464525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-05-13 20:47:39.474295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.713 [2024-05-13 20:47:39.474363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.713 [2024-05-13 20:47:39.474386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.713 [2024-05-13 20:47:39.474393] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.713 [2024-05-13 20:47:39.474400] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.713 [2024-05-13 20:47:39.474414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.713 qpair failed and we were unable to recover it. 
00:34:23.713 [2024-05-13 20:47:39.484414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.713 [2024-05-13 20:47:39.484476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.713 [2024-05-13 20:47:39.484492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.713 [2024-05-13 20:47:39.484498] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.713 [2024-05-13 20:47:39.484504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.713 [2024-05-13 20:47:39.484518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-05-13 20:47:39.494445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.713 [2024-05-13 20:47:39.494526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.714 [2024-05-13 20:47:39.494542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.714 [2024-05-13 20:47:39.494552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.714 [2024-05-13 20:47:39.494558] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.714 [2024-05-13 20:47:39.494574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-05-13 20:47:39.504455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.714 [2024-05-13 20:47:39.504509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.714 [2024-05-13 20:47:39.504525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.714 [2024-05-13 20:47:39.504532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.714 [2024-05-13 20:47:39.504538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.714 [2024-05-13 20:47:39.504552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.714 qpair failed and we were unable to recover it. 
00:34:23.714 [2024-05-13 20:47:39.514590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.714 [2024-05-13 20:47:39.514656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.714 [2024-05-13 20:47:39.514671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.714 [2024-05-13 20:47:39.514678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.714 [2024-05-13 20:47:39.514687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.714 [2024-05-13 20:47:39.514702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-05-13 20:47:39.524391] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.714 [2024-05-13 20:47:39.524451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.714 [2024-05-13 20:47:39.524467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.714 [2024-05-13 20:47:39.524474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.714 [2024-05-13 20:47:39.524480] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.714 [2024-05-13 20:47:39.524494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-05-13 20:47:39.534554] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.714 [2024-05-13 20:47:39.534634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.714 [2024-05-13 20:47:39.534649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.714 [2024-05-13 20:47:39.534656] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.714 [2024-05-13 20:47:39.534662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.714 [2024-05-13 20:47:39.534676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.714 qpair failed and we were unable to recover it. 
00:34:23.714 [2024-05-13 20:47:39.544663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.714 [2024-05-13 20:47:39.544756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.714 [2024-05-13 20:47:39.544771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.714 [2024-05-13 20:47:39.544778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.714 [2024-05-13 20:47:39.544784] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.714 [2024-05-13 20:47:39.544799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-05-13 20:47:39.554608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.714 [2024-05-13 20:47:39.554670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.714 [2024-05-13 20:47:39.554686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.714 [2024-05-13 20:47:39.554693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.714 [2024-05-13 20:47:39.554699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.714 [2024-05-13 20:47:39.554713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-05-13 20:47:39.564602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.714 [2024-05-13 20:47:39.564666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.714 [2024-05-13 20:47:39.564681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.714 [2024-05-13 20:47:39.564688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.714 [2024-05-13 20:47:39.564694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.714 [2024-05-13 20:47:39.564708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.714 qpair failed and we were unable to recover it. 
00:34:23.714 [2024-05-13 20:47:39.574650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.714 [2024-05-13 20:47:39.574719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.714 [2024-05-13 20:47:39.574734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.714 [2024-05-13 20:47:39.574741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.714 [2024-05-13 20:47:39.574747] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.714 [2024-05-13 20:47:39.574761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-05-13 20:47:39.584653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.714 [2024-05-13 20:47:39.584710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.714 [2024-05-13 20:47:39.584725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.714 [2024-05-13 20:47:39.584732] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.714 [2024-05-13 20:47:39.584738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.714 [2024-05-13 20:47:39.584752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-05-13 20:47:39.594718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.714 [2024-05-13 20:47:39.594775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.714 [2024-05-13 20:47:39.594791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.714 [2024-05-13 20:47:39.594798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.714 [2024-05-13 20:47:39.594804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.714 [2024-05-13 20:47:39.594817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.714 qpair failed and we were unable to recover it. 
00:34:23.714 [2024-05-13 20:47:39.604754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.714 [2024-05-13 20:47:39.604839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.714 [2024-05-13 20:47:39.604855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.714 [2024-05-13 20:47:39.604862] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.714 [2024-05-13 20:47:39.604871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.714 [2024-05-13 20:47:39.604885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-05-13 20:47:39.614733] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.714 [2024-05-13 20:47:39.614795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.714 [2024-05-13 20:47:39.614811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.714 [2024-05-13 20:47:39.614819] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.714 [2024-05-13 20:47:39.614825] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.714 [2024-05-13 20:47:39.614840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-05-13 20:47:39.624765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.714 [2024-05-13 20:47:39.624822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.714 [2024-05-13 20:47:39.624838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.714 [2024-05-13 20:47:39.624845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.714 [2024-05-13 20:47:39.624852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.714 [2024-05-13 20:47:39.624866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.714 qpair failed and we were unable to recover it. 
00:34:23.714 [2024-05-13 20:47:39.634828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.715 [2024-05-13 20:47:39.634888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.715 [2024-05-13 20:47:39.634903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.715 [2024-05-13 20:47:39.634911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.715 [2024-05-13 20:47:39.634917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.715 [2024-05-13 20:47:39.634931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-05-13 20:47:39.644753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.715 [2024-05-13 20:47:39.644863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.715 [2024-05-13 20:47:39.644878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.715 [2024-05-13 20:47:39.644885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.715 [2024-05-13 20:47:39.644891] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.715 [2024-05-13 20:47:39.644906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-05-13 20:47:39.654858] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.715 [2024-05-13 20:47:39.654925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.715 [2024-05-13 20:47:39.654941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.715 [2024-05-13 20:47:39.654948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.715 [2024-05-13 20:47:39.654954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.715 [2024-05-13 20:47:39.654969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.715 qpair failed and we were unable to recover it. 
00:34:23.977 [2024-05-13 20:47:39.664836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.664897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.664913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.978 [2024-05-13 20:47:39.664920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.978 [2024-05-13 20:47:39.664926] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.978 [2024-05-13 20:47:39.664941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.978 qpair failed and we were unable to recover it. 00:34:23.978 [2024-05-13 20:47:39.674827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.674899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.674914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.978 [2024-05-13 20:47:39.674921] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.978 [2024-05-13 20:47:39.674927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.978 [2024-05-13 20:47:39.674942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.978 qpair failed and we were unable to recover it. 00:34:23.978 [2024-05-13 20:47:39.684919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.684976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.684991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.978 [2024-05-13 20:47:39.684998] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.978 [2024-05-13 20:47:39.685005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.978 [2024-05-13 20:47:39.685019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.978 qpair failed and we were unable to recover it. 
00:34:23.978 [2024-05-13 20:47:39.694964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.695035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.695059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.978 [2024-05-13 20:47:39.695072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.978 [2024-05-13 20:47:39.695079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.978 [2024-05-13 20:47:39.695097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.978 qpair failed and we were unable to recover it. 00:34:23.978 [2024-05-13 20:47:39.704886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.704960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.704984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.978 [2024-05-13 20:47:39.704993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.978 [2024-05-13 20:47:39.704999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.978 [2024-05-13 20:47:39.705018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.978 qpair failed and we were unable to recover it. 00:34:23.978 [2024-05-13 20:47:39.715078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.715145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.715169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.978 [2024-05-13 20:47:39.715177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.978 [2024-05-13 20:47:39.715184] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.978 [2024-05-13 20:47:39.715202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.978 qpair failed and we were unable to recover it. 
00:34:23.978 [2024-05-13 20:47:39.725046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.725108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.725127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.978 [2024-05-13 20:47:39.725134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.978 [2024-05-13 20:47:39.725140] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.978 [2024-05-13 20:47:39.725155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.978 qpair failed and we were unable to recover it. 00:34:23.978 [2024-05-13 20:47:39.735080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.735144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.735160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.978 [2024-05-13 20:47:39.735167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.978 [2024-05-13 20:47:39.735173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.978 [2024-05-13 20:47:39.735188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.978 qpair failed and we were unable to recover it. 00:34:23.978 [2024-05-13 20:47:39.745105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.745168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.745183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.978 [2024-05-13 20:47:39.745190] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.978 [2024-05-13 20:47:39.745196] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.978 [2024-05-13 20:47:39.745210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.978 qpair failed and we were unable to recover it. 
00:34:23.978 [2024-05-13 20:47:39.755165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.755228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.755244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.978 [2024-05-13 20:47:39.755250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.978 [2024-05-13 20:47:39.755256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.978 [2024-05-13 20:47:39.755271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.978 qpair failed and we were unable to recover it. 00:34:23.978 [2024-05-13 20:47:39.765151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.765211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.765227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.978 [2024-05-13 20:47:39.765234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.978 [2024-05-13 20:47:39.765240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.978 [2024-05-13 20:47:39.765254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.978 qpair failed and we were unable to recover it. 00:34:23.978 [2024-05-13 20:47:39.775189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.775254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.775270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.978 [2024-05-13 20:47:39.775277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.978 [2024-05-13 20:47:39.775283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.978 [2024-05-13 20:47:39.775297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.978 qpair failed and we were unable to recover it. 
00:34:23.978 [2024-05-13 20:47:39.785079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.785152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.785170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.978 [2024-05-13 20:47:39.785178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.978 [2024-05-13 20:47:39.785184] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.978 [2024-05-13 20:47:39.785198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.978 qpair failed and we were unable to recover it. 00:34:23.978 [2024-05-13 20:47:39.795277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.795344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.795359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.978 [2024-05-13 20:47:39.795366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.978 [2024-05-13 20:47:39.795372] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.978 [2024-05-13 20:47:39.795386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.978 qpair failed and we were unable to recover it. 00:34:23.978 [2024-05-13 20:47:39.805266] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.978 [2024-05-13 20:47:39.805336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.978 [2024-05-13 20:47:39.805351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.979 [2024-05-13 20:47:39.805358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.979 [2024-05-13 20:47:39.805364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.979 [2024-05-13 20:47:39.805379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.979 qpair failed and we were unable to recover it. 
00:34:23.979 [2024-05-13 20:47:39.815293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.979 [2024-05-13 20:47:39.815355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.979 [2024-05-13 20:47:39.815371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.979 [2024-05-13 20:47:39.815378] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.979 [2024-05-13 20:47:39.815384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.979 [2024-05-13 20:47:39.815398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.979 qpair failed and we were unable to recover it. 00:34:23.979 [2024-05-13 20:47:39.825376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.979 [2024-05-13 20:47:39.825447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.979 [2024-05-13 20:47:39.825462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.979 [2024-05-13 20:47:39.825469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.979 [2024-05-13 20:47:39.825476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.979 [2024-05-13 20:47:39.825500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.979 qpair failed and we were unable to recover it. 00:34:23.979 [2024-05-13 20:47:39.835258] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.979 [2024-05-13 20:47:39.835333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.979 [2024-05-13 20:47:39.835349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.979 [2024-05-13 20:47:39.835356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.979 [2024-05-13 20:47:39.835362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.979 [2024-05-13 20:47:39.835377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.979 qpair failed and we were unable to recover it. 
00:34:23.979 [2024-05-13 20:47:39.845352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.979 [2024-05-13 20:47:39.845413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.979 [2024-05-13 20:47:39.845428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.979 [2024-05-13 20:47:39.845435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.979 [2024-05-13 20:47:39.845441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.979 [2024-05-13 20:47:39.845456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.979 qpair failed and we were unable to recover it. 00:34:23.979 [2024-05-13 20:47:39.855417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.979 [2024-05-13 20:47:39.855485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.979 [2024-05-13 20:47:39.855500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.979 [2024-05-13 20:47:39.855507] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.979 [2024-05-13 20:47:39.855513] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.979 [2024-05-13 20:47:39.855527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.979 qpair failed and we were unable to recover it. 00:34:23.979 [2024-05-13 20:47:39.865403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.979 [2024-05-13 20:47:39.865457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.979 [2024-05-13 20:47:39.865473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.979 [2024-05-13 20:47:39.865480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.979 [2024-05-13 20:47:39.865486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.979 [2024-05-13 20:47:39.865500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.979 qpair failed and we were unable to recover it. 
00:34:23.979 [2024-05-13 20:47:39.875468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.979 [2024-05-13 20:47:39.875528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.979 [2024-05-13 20:47:39.875547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.979 [2024-05-13 20:47:39.875555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.979 [2024-05-13 20:47:39.875560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.979 [2024-05-13 20:47:39.875574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.979 qpair failed and we were unable to recover it. 00:34:23.979 [2024-05-13 20:47:39.885472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.979 [2024-05-13 20:47:39.885532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.979 [2024-05-13 20:47:39.885548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.979 [2024-05-13 20:47:39.885555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.979 [2024-05-13 20:47:39.885561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.979 [2024-05-13 20:47:39.885575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.979 qpair failed and we were unable to recover it. 00:34:23.979 [2024-05-13 20:47:39.895527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.979 [2024-05-13 20:47:39.895588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.979 [2024-05-13 20:47:39.895604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.979 [2024-05-13 20:47:39.895610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.979 [2024-05-13 20:47:39.895617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.979 [2024-05-13 20:47:39.895631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.979 qpair failed and we were unable to recover it. 
00:34:23.979 [2024-05-13 20:47:39.905518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.979 [2024-05-13 20:47:39.905588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.979 [2024-05-13 20:47:39.905603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.979 [2024-05-13 20:47:39.905610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.979 [2024-05-13 20:47:39.905616] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.979 [2024-05-13 20:47:39.905631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.979 qpair failed and we were unable to recover it. 00:34:23.979 [2024-05-13 20:47:39.915584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.979 [2024-05-13 20:47:39.915643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.979 [2024-05-13 20:47:39.915658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.979 [2024-05-13 20:47:39.915665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.979 [2024-05-13 20:47:39.915672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:23.979 [2024-05-13 20:47:39.915689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:23.979 qpair failed and we were unable to recover it. 00:34:24.243 [2024-05-13 20:47:39.925598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.243 [2024-05-13 20:47:39.925659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.243 [2024-05-13 20:47:39.925675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.243 [2024-05-13 20:47:39.925682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.243 [2024-05-13 20:47:39.925688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.243 [2024-05-13 20:47:39.925702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.243 qpair failed and we were unable to recover it. 
00:34:24.243 [2024-05-13 20:47:39.935602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.243 [2024-05-13 20:47:39.935661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.243 [2024-05-13 20:47:39.935677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.243 [2024-05-13 20:47:39.935684] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.243 [2024-05-13 20:47:39.935690] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.243 [2024-05-13 20:47:39.935704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.243 qpair failed and we were unable to recover it. 00:34:24.243 [2024-05-13 20:47:39.945632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.243 [2024-05-13 20:47:39.945695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.243 [2024-05-13 20:47:39.945710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.243 [2024-05-13 20:47:39.945717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.243 [2024-05-13 20:47:39.945723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.243 [2024-05-13 20:47:39.945737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.243 qpair failed and we were unable to recover it. 00:34:24.243 [2024-05-13 20:47:39.955694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.243 [2024-05-13 20:47:39.955750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.243 [2024-05-13 20:47:39.955764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.243 [2024-05-13 20:47:39.955771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.243 [2024-05-13 20:47:39.955777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.243 [2024-05-13 20:47:39.955792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.243 qpair failed and we were unable to recover it. 
00:34:24.243 [2024-05-13 20:47:39.965682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.243 [2024-05-13 20:47:39.965745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.243 [2024-05-13 20:47:39.965761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.243 [2024-05-13 20:47:39.965768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.243 [2024-05-13 20:47:39.965774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.243 [2024-05-13 20:47:39.965788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.243 qpair failed and we were unable to recover it. 00:34:24.243 [2024-05-13 20:47:39.975703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.243 [2024-05-13 20:47:39.975767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.243 [2024-05-13 20:47:39.975782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.243 [2024-05-13 20:47:39.975789] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.243 [2024-05-13 20:47:39.975795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.243 [2024-05-13 20:47:39.975809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.243 qpair failed and we were unable to recover it. 00:34:24.243 [2024-05-13 20:47:39.985729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.243 [2024-05-13 20:47:39.985786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.243 [2024-05-13 20:47:39.985801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.243 [2024-05-13 20:47:39.985808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.243 [2024-05-13 20:47:39.985814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.243 [2024-05-13 20:47:39.985828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.243 qpair failed and we were unable to recover it. 
00:34:24.243 [2024-05-13 20:47:39.995793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.243 [2024-05-13 20:47:39.995855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.243 [2024-05-13 20:47:39.995871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.243 [2024-05-13 20:47:39.995878] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.243 [2024-05-13 20:47:39.995883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.243 [2024-05-13 20:47:39.995897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.243 qpair failed and we were unable to recover it. 00:34:24.243 [2024-05-13 20:47:40.005892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.243 [2024-05-13 20:47:40.005956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.243 [2024-05-13 20:47:40.005972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.243 [2024-05-13 20:47:40.005979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.243 [2024-05-13 20:47:40.005989] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.243 [2024-05-13 20:47:40.006003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.243 qpair failed and we were unable to recover it. 00:34:24.243 [2024-05-13 20:47:40.015818] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.243 [2024-05-13 20:47:40.015929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.243 [2024-05-13 20:47:40.015946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.243 [2024-05-13 20:47:40.015954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.243 [2024-05-13 20:47:40.015960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.243 [2024-05-13 20:47:40.015974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.243 qpair failed and we were unable to recover it. 
00:34:24.243 [2024-05-13 20:47:40.025861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.243 [2024-05-13 20:47:40.025919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.243 [2024-05-13 20:47:40.025934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.243 [2024-05-13 20:47:40.025941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.243 [2024-05-13 20:47:40.025948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.243 [2024-05-13 20:47:40.025962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.243 qpair failed and we were unable to recover it. 00:34:24.243 [2024-05-13 20:47:40.035916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.244 [2024-05-13 20:47:40.035976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.244 [2024-05-13 20:47:40.035992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.244 [2024-05-13 20:47:40.035999] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.244 [2024-05-13 20:47:40.036005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.244 [2024-05-13 20:47:40.036019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.244 qpair failed and we were unable to recover it. 00:34:24.244 [2024-05-13 20:47:40.045895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.244 [2024-05-13 20:47:40.045963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.244 [2024-05-13 20:47:40.045987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.244 [2024-05-13 20:47:40.045996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.244 [2024-05-13 20:47:40.046003] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.244 [2024-05-13 20:47:40.046022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.244 qpair failed and we were unable to recover it. 
00:34:24.244 [2024-05-13 20:47:40.055818] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.244 [2024-05-13 20:47:40.055878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.244 [2024-05-13 20:47:40.055896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.244 [2024-05-13 20:47:40.055904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.244 [2024-05-13 20:47:40.055910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.244 [2024-05-13 20:47:40.055933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.244 qpair failed and we were unable to recover it. 00:34:24.244 [2024-05-13 20:47:40.065955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.244 [2024-05-13 20:47:40.066019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.244 [2024-05-13 20:47:40.066035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.244 [2024-05-13 20:47:40.066042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.244 [2024-05-13 20:47:40.066048] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.244 [2024-05-13 20:47:40.066063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.244 qpair failed and we were unable to recover it. 00:34:24.244 [2024-05-13 20:47:40.076040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.244 [2024-05-13 20:47:40.076100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.244 [2024-05-13 20:47:40.076116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.244 [2024-05-13 20:47:40.076123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.244 [2024-05-13 20:47:40.076129] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.244 [2024-05-13 20:47:40.076144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.244 qpair failed and we were unable to recover it. 
00:34:24.244 [2024-05-13 20:47:40.086011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.244 [2024-05-13 20:47:40.086071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.244 [2024-05-13 20:47:40.086087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.244 [2024-05-13 20:47:40.086094] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.244 [2024-05-13 20:47:40.086100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.244 [2024-05-13 20:47:40.086114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.244 qpair failed and we were unable to recover it. 00:34:24.244 [2024-05-13 20:47:40.096049] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.244 [2024-05-13 20:47:40.096114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.244 [2024-05-13 20:47:40.096130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.244 [2024-05-13 20:47:40.096142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.244 [2024-05-13 20:47:40.096148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.244 [2024-05-13 20:47:40.096162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.244 qpair failed and we were unable to recover it. 00:34:24.244 [2024-05-13 20:47:40.106069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.244 [2024-05-13 20:47:40.106129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.244 [2024-05-13 20:47:40.106145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.244 [2024-05-13 20:47:40.106152] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.244 [2024-05-13 20:47:40.106159] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.244 [2024-05-13 20:47:40.106173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.244 qpair failed and we were unable to recover it. 
00:34:24.244 [2024-05-13 20:47:40.116202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.244 [2024-05-13 20:47:40.116309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.244 [2024-05-13 20:47:40.116329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.244 [2024-05-13 20:47:40.116337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.244 [2024-05-13 20:47:40.116343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.244 [2024-05-13 20:47:40.116358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.244 qpair failed and we were unable to recover it. 00:34:24.244 [2024-05-13 20:47:40.126013] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.244 [2024-05-13 20:47:40.126073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.244 [2024-05-13 20:47:40.126088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.244 [2024-05-13 20:47:40.126095] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.244 [2024-05-13 20:47:40.126101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.244 [2024-05-13 20:47:40.126116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.244 qpair failed and we were unable to recover it. 00:34:24.244 [2024-05-13 20:47:40.136139] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.244 [2024-05-13 20:47:40.136200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.244 [2024-05-13 20:47:40.136216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.244 [2024-05-13 20:47:40.136223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.244 [2024-05-13 20:47:40.136229] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.244 [2024-05-13 20:47:40.136244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.244 qpair failed and we were unable to recover it. 
00:34:24.244 [2024-05-13 20:47:40.146051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.244 [2024-05-13 20:47:40.146119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.244 [2024-05-13 20:47:40.146135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.244 [2024-05-13 20:47:40.146142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.244 [2024-05-13 20:47:40.146148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.244 [2024-05-13 20:47:40.146162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.244 qpair failed and we were unable to recover it. 00:34:24.244 [2024-05-13 20:47:40.156105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.244 [2024-05-13 20:47:40.156170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.244 [2024-05-13 20:47:40.156186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.244 [2024-05-13 20:47:40.156193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.244 [2024-05-13 20:47:40.156199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.244 [2024-05-13 20:47:40.156213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.244 qpair failed and we were unable to recover it. 00:34:24.244 [2024-05-13 20:47:40.166202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.244 [2024-05-13 20:47:40.166267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.244 [2024-05-13 20:47:40.166283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.244 [2024-05-13 20:47:40.166290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.244 [2024-05-13 20:47:40.166296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.244 [2024-05-13 20:47:40.166311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.244 qpair failed and we were unable to recover it. 
00:34:24.244 [2024-05-13 20:47:40.176295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.245 [2024-05-13 20:47:40.176373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.245 [2024-05-13 20:47:40.176389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.245 [2024-05-13 20:47:40.176397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.245 [2024-05-13 20:47:40.176403] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.245 [2024-05-13 20:47:40.176418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.245 qpair failed and we were unable to recover it. 00:34:24.507 [2024-05-13 20:47:40.186153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.507 [2024-05-13 20:47:40.186214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.507 [2024-05-13 20:47:40.186230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.507 [2024-05-13 20:47:40.186241] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.507 [2024-05-13 20:47:40.186247] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.507 [2024-05-13 20:47:40.186261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.507 qpair failed and we were unable to recover it. 00:34:24.507 [2024-05-13 20:47:40.196317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.507 [2024-05-13 20:47:40.196377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.507 [2024-05-13 20:47:40.196392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.507 [2024-05-13 20:47:40.196399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.507 [2024-05-13 20:47:40.196405] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.507 [2024-05-13 20:47:40.196420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.507 qpair failed and we were unable to recover it. 
00:34:24.507 [2024-05-13 20:47:40.206325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.507 [2024-05-13 20:47:40.206393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.507 [2024-05-13 20:47:40.206408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.507 [2024-05-13 20:47:40.206415] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.507 [2024-05-13 20:47:40.206422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.507 [2024-05-13 20:47:40.206436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.507 qpair failed and we were unable to recover it. 00:34:24.507 [2024-05-13 20:47:40.216304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.507 [2024-05-13 20:47:40.216413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.507 [2024-05-13 20:47:40.216429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.507 [2024-05-13 20:47:40.216436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.507 [2024-05-13 20:47:40.216442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.507 [2024-05-13 20:47:40.216456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.507 qpair failed and we were unable to recover it. 00:34:24.507 [2024-05-13 20:47:40.226362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.507 [2024-05-13 20:47:40.226468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.507 [2024-05-13 20:47:40.226483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.507 [2024-05-13 20:47:40.226490] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.507 [2024-05-13 20:47:40.226496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.507 [2024-05-13 20:47:40.226511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.507 qpair failed and we were unable to recover it. 
00:34:24.507 [2024-05-13 20:47:40.236332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.507 [2024-05-13 20:47:40.236414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.507 [2024-05-13 20:47:40.236429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.507 [2024-05-13 20:47:40.236437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.507 [2024-05-13 20:47:40.236443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.507 [2024-05-13 20:47:40.236457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.507 qpair failed and we were unable to recover it. 00:34:24.507 [2024-05-13 20:47:40.246455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.507 [2024-05-13 20:47:40.246516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.507 [2024-05-13 20:47:40.246531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.507 [2024-05-13 20:47:40.246538] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.507 [2024-05-13 20:47:40.246544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.507 [2024-05-13 20:47:40.246558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.507 qpair failed and we were unable to recover it. 00:34:24.507 [2024-05-13 20:47:40.256483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.507 [2024-05-13 20:47:40.256545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.507 [2024-05-13 20:47:40.256560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.507 [2024-05-13 20:47:40.256567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.507 [2024-05-13 20:47:40.256573] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.507 [2024-05-13 20:47:40.256588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.507 qpair failed and we were unable to recover it. 
00:34:24.507 [2024-05-13 20:47:40.266527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.507 [2024-05-13 20:47:40.266588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.507 [2024-05-13 20:47:40.266604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.507 [2024-05-13 20:47:40.266610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.507 [2024-05-13 20:47:40.266616] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.507 [2024-05-13 20:47:40.266631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.507 qpair failed and we were unable to recover it. 00:34:24.507 [2024-05-13 20:47:40.276565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.507 [2024-05-13 20:47:40.276630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.507 [2024-05-13 20:47:40.276651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.507 [2024-05-13 20:47:40.276659] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.507 [2024-05-13 20:47:40.276665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.507 [2024-05-13 20:47:40.276680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.507 qpair failed and we were unable to recover it. 00:34:24.507 [2024-05-13 20:47:40.286552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.507 [2024-05-13 20:47:40.286612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.507 [2024-05-13 20:47:40.286628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.507 [2024-05-13 20:47:40.286634] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.507 [2024-05-13 20:47:40.286640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.507 [2024-05-13 20:47:40.286654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.507 qpair failed and we were unable to recover it. 
00:34:24.508 [2024-05-13 20:47:40.296574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.508 [2024-05-13 20:47:40.296635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.508 [2024-05-13 20:47:40.296651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.508 [2024-05-13 20:47:40.296658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.508 [2024-05-13 20:47:40.296664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.508 [2024-05-13 20:47:40.296678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.508 qpair failed and we were unable to recover it. 00:34:24.508 [2024-05-13 20:47:40.306608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.508 [2024-05-13 20:47:40.306668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.508 [2024-05-13 20:47:40.306683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.508 [2024-05-13 20:47:40.306690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.508 [2024-05-13 20:47:40.306696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.508 [2024-05-13 20:47:40.306710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.508 qpair failed and we were unable to recover it. 00:34:24.508 [2024-05-13 20:47:40.316667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.508 [2024-05-13 20:47:40.316730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.508 [2024-05-13 20:47:40.316745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.508 [2024-05-13 20:47:40.316752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.508 [2024-05-13 20:47:40.316758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.508 [2024-05-13 20:47:40.316776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.508 qpair failed and we were unable to recover it. 
00:34:24.508 [2024-05-13 20:47:40.326662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.508 [2024-05-13 20:47:40.326723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.508 [2024-05-13 20:47:40.326738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.508 [2024-05-13 20:47:40.326745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.508 [2024-05-13 20:47:40.326751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.508 [2024-05-13 20:47:40.326765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.508 qpair failed and we were unable to recover it. 00:34:24.508 [2024-05-13 20:47:40.336691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.508 [2024-05-13 20:47:40.336761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.508 [2024-05-13 20:47:40.336776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.508 [2024-05-13 20:47:40.336783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.508 [2024-05-13 20:47:40.336790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.508 [2024-05-13 20:47:40.336803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.508 qpair failed and we were unable to recover it. 00:34:24.508 [2024-05-13 20:47:40.346719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.508 [2024-05-13 20:47:40.346782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.508 [2024-05-13 20:47:40.346797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.508 [2024-05-13 20:47:40.346804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.508 [2024-05-13 20:47:40.346810] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.508 [2024-05-13 20:47:40.346824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.508 qpair failed and we were unable to recover it. 
00:34:24.508 [2024-05-13 20:47:40.356640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.508 [2024-05-13 20:47:40.356711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.508 [2024-05-13 20:47:40.356727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.508 [2024-05-13 20:47:40.356733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.508 [2024-05-13 20:47:40.356740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.508 [2024-05-13 20:47:40.356753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.508 qpair failed and we were unable to recover it. 00:34:24.508 [2024-05-13 20:47:40.366742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.508 [2024-05-13 20:47:40.366806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.508 [2024-05-13 20:47:40.366825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.508 [2024-05-13 20:47:40.366832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.508 [2024-05-13 20:47:40.366838] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.508 [2024-05-13 20:47:40.366852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.508 qpair failed and we were unable to recover it. 00:34:24.508 [2024-05-13 20:47:40.376765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.508 [2024-05-13 20:47:40.376831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.508 [2024-05-13 20:47:40.376847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.508 [2024-05-13 20:47:40.376853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.508 [2024-05-13 20:47:40.376860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.508 [2024-05-13 20:47:40.376874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.508 qpair failed and we were unable to recover it. 
00:34:24.508 [2024-05-13 20:47:40.386816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.508 [2024-05-13 20:47:40.386874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.508 [2024-05-13 20:47:40.386889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.508 [2024-05-13 20:47:40.386896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.508 [2024-05-13 20:47:40.386902] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.508 [2024-05-13 20:47:40.386916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.508 qpair failed and we were unable to recover it. 00:34:24.508 [2024-05-13 20:47:40.396766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.508 [2024-05-13 20:47:40.396866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.508 [2024-05-13 20:47:40.396881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.508 [2024-05-13 20:47:40.396889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.508 [2024-05-13 20:47:40.396894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.508 [2024-05-13 20:47:40.396908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.508 qpair failed and we were unable to recover it. 00:34:24.508 [2024-05-13 20:47:40.406746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.508 [2024-05-13 20:47:40.406804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.508 [2024-05-13 20:47:40.406820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.508 [2024-05-13 20:47:40.406827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.508 [2024-05-13 20:47:40.406836] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.508 [2024-05-13 20:47:40.406856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.508 qpair failed and we were unable to recover it. 
00:34:24.508 [2024-05-13 20:47:40.416781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.508 [2024-05-13 20:47:40.416895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.508 [2024-05-13 20:47:40.416910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.508 [2024-05-13 20:47:40.416917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.508 [2024-05-13 20:47:40.416924] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.509 [2024-05-13 20:47:40.416938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.509 qpair failed and we were unable to recover it. 00:34:24.509 [2024-05-13 20:47:40.426909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.509 [2024-05-13 20:47:40.426970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.509 [2024-05-13 20:47:40.426985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.509 [2024-05-13 20:47:40.426992] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.509 [2024-05-13 20:47:40.426998] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.509 [2024-05-13 20:47:40.427012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.509 qpair failed and we were unable to recover it. 00:34:24.509 [2024-05-13 20:47:40.436959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.509 [2024-05-13 20:47:40.437021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.509 [2024-05-13 20:47:40.437036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.509 [2024-05-13 20:47:40.437043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.509 [2024-05-13 20:47:40.437049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.509 [2024-05-13 20:47:40.437063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.509 qpair failed and we were unable to recover it. 
00:34:24.509 [2024-05-13 20:47:40.446955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.509 [2024-05-13 20:47:40.447015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.509 [2024-05-13 20:47:40.447031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.509 [2024-05-13 20:47:40.447038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.509 [2024-05-13 20:47:40.447044] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.509 [2024-05-13 20:47:40.447058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.509 qpair failed and we were unable to recover it. 00:34:24.771 [2024-05-13 20:47:40.456877] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.771 [2024-05-13 20:47:40.456945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.771 [2024-05-13 20:47:40.456960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.771 [2024-05-13 20:47:40.456967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.771 [2024-05-13 20:47:40.456973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.771 [2024-05-13 20:47:40.456987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.771 qpair failed and we were unable to recover it. 00:34:24.771 [2024-05-13 20:47:40.467004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.771 [2024-05-13 20:47:40.467057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.771 [2024-05-13 20:47:40.467072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.771 [2024-05-13 20:47:40.467079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.771 [2024-05-13 20:47:40.467085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.771 [2024-05-13 20:47:40.467100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.771 qpair failed and we were unable to recover it. 
00:34:24.771 [2024-05-13 20:47:40.477080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.771 [2024-05-13 20:47:40.477175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.771 [2024-05-13 20:47:40.477190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.771 [2024-05-13 20:47:40.477197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.771 [2024-05-13 20:47:40.477204] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.771 [2024-05-13 20:47:40.477218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.771 qpair failed and we were unable to recover it. 00:34:24.771 [2024-05-13 20:47:40.487076] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.771 [2024-05-13 20:47:40.487137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.771 [2024-05-13 20:47:40.487153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.771 [2024-05-13 20:47:40.487160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.771 [2024-05-13 20:47:40.487167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.771 [2024-05-13 20:47:40.487181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.771 qpair failed and we were unable to recover it. 00:34:24.771 [2024-05-13 20:47:40.497148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.771 [2024-05-13 20:47:40.497211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.771 [2024-05-13 20:47:40.497227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.771 [2024-05-13 20:47:40.497237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.771 [2024-05-13 20:47:40.497243] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.771 [2024-05-13 20:47:40.497257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.771 qpair failed and we were unable to recover it. 
00:34:24.771 [2024-05-13 20:47:40.507123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.771 [2024-05-13 20:47:40.507186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.772 [2024-05-13 20:47:40.507201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.772 [2024-05-13 20:47:40.507208] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.772 [2024-05-13 20:47:40.507214] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.772 [2024-05-13 20:47:40.507228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.772 qpair failed and we were unable to recover it. 00:34:24.772 [2024-05-13 20:47:40.517188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.772 [2024-05-13 20:47:40.517252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.772 [2024-05-13 20:47:40.517267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.772 [2024-05-13 20:47:40.517275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.772 [2024-05-13 20:47:40.517281] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.772 [2024-05-13 20:47:40.517296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.772 qpair failed and we were unable to recover it. 00:34:24.772 [2024-05-13 20:47:40.527177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.772 [2024-05-13 20:47:40.527234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.772 [2024-05-13 20:47:40.527249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.772 [2024-05-13 20:47:40.527256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.772 [2024-05-13 20:47:40.527262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.772 [2024-05-13 20:47:40.527276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.772 qpair failed and we were unable to recover it. 
00:34:24.772 [2024-05-13 20:47:40.537202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.772 [2024-05-13 20:47:40.537268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.772 [2024-05-13 20:47:40.537284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.772 [2024-05-13 20:47:40.537291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.772 [2024-05-13 20:47:40.537297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.772 [2024-05-13 20:47:40.537311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.772 qpair failed and we were unable to recover it. 00:34:24.772 [2024-05-13 20:47:40.547266] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.772 [2024-05-13 20:47:40.547364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.772 [2024-05-13 20:47:40.547380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.772 [2024-05-13 20:47:40.547387] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.772 [2024-05-13 20:47:40.547393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.772 [2024-05-13 20:47:40.547408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.772 qpair failed and we were unable to recover it. 00:34:24.772 [2024-05-13 20:47:40.557202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.772 [2024-05-13 20:47:40.557267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.772 [2024-05-13 20:47:40.557283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.772 [2024-05-13 20:47:40.557290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.772 [2024-05-13 20:47:40.557296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.772 [2024-05-13 20:47:40.557310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.772 qpair failed and we were unable to recover it. 
00:34:24.772 [2024-05-13 20:47:40.567175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.772 [2024-05-13 20:47:40.567232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.772 [2024-05-13 20:47:40.567248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.772 [2024-05-13 20:47:40.567256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.772 [2024-05-13 20:47:40.567262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.772 [2024-05-13 20:47:40.567277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.772 qpair failed and we were unable to recover it. 00:34:24.772 [2024-05-13 20:47:40.577317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.772 [2024-05-13 20:47:40.577378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.772 [2024-05-13 20:47:40.577393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.772 [2024-05-13 20:47:40.577400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.772 [2024-05-13 20:47:40.577407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.772 [2024-05-13 20:47:40.577421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.772 qpair failed and we were unable to recover it. 00:34:24.772 [2024-05-13 20:47:40.587261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.772 [2024-05-13 20:47:40.587322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.772 [2024-05-13 20:47:40.587338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.772 [2024-05-13 20:47:40.587348] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.772 [2024-05-13 20:47:40.587354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.772 [2024-05-13 20:47:40.587369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.772 qpair failed and we were unable to recover it. 
00:34:24.772 [2024-05-13 20:47:40.597392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.772 [2024-05-13 20:47:40.597453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.772 [2024-05-13 20:47:40.597469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.772 [2024-05-13 20:47:40.597476] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.772 [2024-05-13 20:47:40.597482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.772 [2024-05-13 20:47:40.597496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.772 qpair failed and we were unable to recover it. 00:34:24.772 [2024-05-13 20:47:40.607361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.772 [2024-05-13 20:47:40.607419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.772 [2024-05-13 20:47:40.607435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.772 [2024-05-13 20:47:40.607442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.772 [2024-05-13 20:47:40.607448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.772 [2024-05-13 20:47:40.607462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.772 qpair failed and we were unable to recover it. 00:34:24.772 [2024-05-13 20:47:40.617410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.772 [2024-05-13 20:47:40.617529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.772 [2024-05-13 20:47:40.617544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.772 [2024-05-13 20:47:40.617552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.772 [2024-05-13 20:47:40.617558] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.772 [2024-05-13 20:47:40.617572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.772 qpair failed and we were unable to recover it. 
00:34:24.772 [2024-05-13 20:47:40.627332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.772 [2024-05-13 20:47:40.627392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.772 [2024-05-13 20:47:40.627407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.772 [2024-05-13 20:47:40.627414] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.772 [2024-05-13 20:47:40.627420] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.772 [2024-05-13 20:47:40.627434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.772 qpair failed and we were unable to recover it. 00:34:24.772 [2024-05-13 20:47:40.637524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.772 [2024-05-13 20:47:40.637584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.772 [2024-05-13 20:47:40.637600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.772 [2024-05-13 20:47:40.637607] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.772 [2024-05-13 20:47:40.637613] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.772 [2024-05-13 20:47:40.637627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.772 qpair failed and we were unable to recover it. 00:34:24.772 [2024-05-13 20:47:40.647518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.773 [2024-05-13 20:47:40.647575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.773 [2024-05-13 20:47:40.647590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.773 [2024-05-13 20:47:40.647597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.773 [2024-05-13 20:47:40.647604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.773 [2024-05-13 20:47:40.647617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.773 qpair failed and we were unable to recover it. 
00:34:24.773 [2024-05-13 20:47:40.657400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.773 [2024-05-13 20:47:40.657465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.773 [2024-05-13 20:47:40.657480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.773 [2024-05-13 20:47:40.657487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.773 [2024-05-13 20:47:40.657493] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.773 [2024-05-13 20:47:40.657507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.773 qpair failed and we were unable to recover it. 00:34:24.773 [2024-05-13 20:47:40.667549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.773 [2024-05-13 20:47:40.667644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.773 [2024-05-13 20:47:40.667662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.773 [2024-05-13 20:47:40.667669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.773 [2024-05-13 20:47:40.667675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.773 [2024-05-13 20:47:40.667689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.773 qpair failed and we were unable to recover it. 00:34:24.773 [2024-05-13 20:47:40.677616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.773 [2024-05-13 20:47:40.677676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.773 [2024-05-13 20:47:40.677695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.773 [2024-05-13 20:47:40.677702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.773 [2024-05-13 20:47:40.677708] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.773 [2024-05-13 20:47:40.677722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.773 qpair failed and we were unable to recover it. 
00:34:24.773 [2024-05-13 20:47:40.687644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.773 [2024-05-13 20:47:40.687701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.773 [2024-05-13 20:47:40.687717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.773 [2024-05-13 20:47:40.687724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.773 [2024-05-13 20:47:40.687730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.773 [2024-05-13 20:47:40.687745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.773 qpair failed and we were unable to recover it. 00:34:24.773 [2024-05-13 20:47:40.697526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.773 [2024-05-13 20:47:40.697586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.773 [2024-05-13 20:47:40.697602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.773 [2024-05-13 20:47:40.697610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.773 [2024-05-13 20:47:40.697616] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.773 [2024-05-13 20:47:40.697631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.773 qpair failed and we were unable to recover it. 00:34:24.773 [2024-05-13 20:47:40.707665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.773 [2024-05-13 20:47:40.707728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.773 [2024-05-13 20:47:40.707744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.773 [2024-05-13 20:47:40.707751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.773 [2024-05-13 20:47:40.707757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:24.773 [2024-05-13 20:47:40.707771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:24.773 qpair failed and we were unable to recover it. 
00:34:25.035 [2024-05-13 20:47:40.717775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.035 [2024-05-13 20:47:40.717834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.035 [2024-05-13 20:47:40.717849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.035 [2024-05-13 20:47:40.717856] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.035 [2024-05-13 20:47:40.717863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.035 [2024-05-13 20:47:40.717881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.035 qpair failed and we were unable to recover it. 00:34:25.035 [2024-05-13 20:47:40.727705] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.035 [2024-05-13 20:47:40.727803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.035 [2024-05-13 20:47:40.727818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.035 [2024-05-13 20:47:40.727825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.035 [2024-05-13 20:47:40.727831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.035 [2024-05-13 20:47:40.727846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.035 qpair failed and we were unable to recover it. 00:34:25.035 [2024-05-13 20:47:40.737637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.035 [2024-05-13 20:47:40.737701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.035 [2024-05-13 20:47:40.737717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.035 [2024-05-13 20:47:40.737724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.035 [2024-05-13 20:47:40.737730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.035 [2024-05-13 20:47:40.737744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.035 qpair failed and we were unable to recover it. 
00:34:25.035 [2024-05-13 20:47:40.747765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.035 [2024-05-13 20:47:40.747824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.035 [2024-05-13 20:47:40.747839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.035 [2024-05-13 20:47:40.747846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.035 [2024-05-13 20:47:40.747853] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.035 [2024-05-13 20:47:40.747867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.035 qpair failed and we were unable to recover it. 00:34:25.035 [2024-05-13 20:47:40.757867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.035 [2024-05-13 20:47:40.757934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.035 [2024-05-13 20:47:40.757949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.035 [2024-05-13 20:47:40.757956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.035 [2024-05-13 20:47:40.757962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.035 [2024-05-13 20:47:40.757977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.035 qpair failed and we were unable to recover it. 00:34:25.035 [2024-05-13 20:47:40.767820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.035 [2024-05-13 20:47:40.767880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.035 [2024-05-13 20:47:40.767899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.035 [2024-05-13 20:47:40.767906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.035 [2024-05-13 20:47:40.767912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.035 [2024-05-13 20:47:40.767926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.035 qpair failed and we were unable to recover it. 
00:34:25.035 [2024-05-13 20:47:40.777861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.035 [2024-05-13 20:47:40.777919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.035 [2024-05-13 20:47:40.777934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.035 [2024-05-13 20:47:40.777942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.035 [2024-05-13 20:47:40.777948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.035 [2024-05-13 20:47:40.777962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.035 qpair failed and we were unable to recover it. 00:34:25.035 [2024-05-13 20:47:40.787875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.035 [2024-05-13 20:47:40.787930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.035 [2024-05-13 20:47:40.787947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.035 [2024-05-13 20:47:40.787954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.035 [2024-05-13 20:47:40.787960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.035 [2024-05-13 20:47:40.787975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.035 qpair failed and we were unable to recover it. 00:34:25.035 [2024-05-13 20:47:40.797934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.035 [2024-05-13 20:47:40.797994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.035 [2024-05-13 20:47:40.798011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.035 [2024-05-13 20:47:40.798018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.035 [2024-05-13 20:47:40.798024] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.036 [2024-05-13 20:47:40.798039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.036 qpair failed and we were unable to recover it. 
00:34:25.036 [2024-05-13 20:47:40.807955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.036 [2024-05-13 20:47:40.808018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.036 [2024-05-13 20:47:40.808034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.036 [2024-05-13 20:47:40.808041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.036 [2024-05-13 20:47:40.808051] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.036 [2024-05-13 20:47:40.808065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.036 qpair failed and we were unable to recover it. 00:34:25.036 [2024-05-13 20:47:40.817969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.036 [2024-05-13 20:47:40.818028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.036 [2024-05-13 20:47:40.818044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.036 [2024-05-13 20:47:40.818051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.036 [2024-05-13 20:47:40.818057] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.036 [2024-05-13 20:47:40.818071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.036 qpair failed and we were unable to recover it. 00:34:25.036 [2024-05-13 20:47:40.827965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.036 [2024-05-13 20:47:40.828029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.036 [2024-05-13 20:47:40.828044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.036 [2024-05-13 20:47:40.828051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.036 [2024-05-13 20:47:40.828057] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.036 [2024-05-13 20:47:40.828072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.036 qpair failed and we were unable to recover it. 
00:34:25.036 [2024-05-13 20:47:40.838033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.036 [2024-05-13 20:47:40.838143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.036 [2024-05-13 20:47:40.838158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.036 [2024-05-13 20:47:40.838165] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.036 [2024-05-13 20:47:40.838171] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.036 [2024-05-13 20:47:40.838185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.036 qpair failed and we were unable to recover it. 00:34:25.036 [2024-05-13 20:47:40.848036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.036 [2024-05-13 20:47:40.848097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.036 [2024-05-13 20:47:40.848112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.036 [2024-05-13 20:47:40.848119] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.036 [2024-05-13 20:47:40.848126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.036 [2024-05-13 20:47:40.848140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.036 qpair failed and we were unable to recover it. 00:34:25.036 [2024-05-13 20:47:40.858025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.036 [2024-05-13 20:47:40.858085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.036 [2024-05-13 20:47:40.858101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.036 [2024-05-13 20:47:40.858108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.036 [2024-05-13 20:47:40.858114] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.036 [2024-05-13 20:47:40.858128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.036 qpair failed and we were unable to recover it. 
00:34:25.036 [2024-05-13 20:47:40.868080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.036 [2024-05-13 20:47:40.868140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.036 [2024-05-13 20:47:40.868156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.036 [2024-05-13 20:47:40.868163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.036 [2024-05-13 20:47:40.868169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.036 [2024-05-13 20:47:40.868183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.036 qpair failed and we were unable to recover it. 00:34:25.036 [2024-05-13 20:47:40.878156] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.036 [2024-05-13 20:47:40.878217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.036 [2024-05-13 20:47:40.878233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.036 [2024-05-13 20:47:40.878240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.036 [2024-05-13 20:47:40.878246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.036 [2024-05-13 20:47:40.878260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.036 qpair failed and we were unable to recover it. 00:34:25.036 [2024-05-13 20:47:40.888031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.036 [2024-05-13 20:47:40.888092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.036 [2024-05-13 20:47:40.888107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.036 [2024-05-13 20:47:40.888114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.036 [2024-05-13 20:47:40.888120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.036 [2024-05-13 20:47:40.888134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.036 qpair failed and we were unable to recover it. 
00:34:25.036 [2024-05-13 20:47:40.898070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.036 [2024-05-13 20:47:40.898138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.036 [2024-05-13 20:47:40.898153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.036 [2024-05-13 20:47:40.898160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.036 [2024-05-13 20:47:40.898171] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.036 [2024-05-13 20:47:40.898185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.036 qpair failed and we were unable to recover it. 00:34:25.036 [2024-05-13 20:47:40.908192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.036 [2024-05-13 20:47:40.908252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.036 [2024-05-13 20:47:40.908268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.036 [2024-05-13 20:47:40.908275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.036 [2024-05-13 20:47:40.908281] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.036 [2024-05-13 20:47:40.908295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.036 qpair failed and we were unable to recover it. 00:34:25.036 [2024-05-13 20:47:40.918261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.036 [2024-05-13 20:47:40.918336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.036 [2024-05-13 20:47:40.918355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.036 [2024-05-13 20:47:40.918362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.036 [2024-05-13 20:47:40.918368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.036 [2024-05-13 20:47:40.918383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.036 qpair failed and we were unable to recover it. 
00:34:25.036 [2024-05-13 20:47:40.928323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.036 [2024-05-13 20:47:40.928385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.036 [2024-05-13 20:47:40.928400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.036 [2024-05-13 20:47:40.928408] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.036 [2024-05-13 20:47:40.928414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.036 [2024-05-13 20:47:40.928428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.036 qpair failed and we were unable to recover it. 00:34:25.036 [2024-05-13 20:47:40.938278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.036 [2024-05-13 20:47:40.938346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.036 [2024-05-13 20:47:40.938362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.037 [2024-05-13 20:47:40.938369] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.037 [2024-05-13 20:47:40.938375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.037 [2024-05-13 20:47:40.938389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.037 qpair failed and we were unable to recover it. 00:34:25.037 [2024-05-13 20:47:40.948316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.037 [2024-05-13 20:47:40.948378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.037 [2024-05-13 20:47:40.948393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.037 [2024-05-13 20:47:40.948400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.037 [2024-05-13 20:47:40.948406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.037 [2024-05-13 20:47:40.948421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.037 qpair failed and we were unable to recover it. 
00:34:25.037 [2024-05-13 20:47:40.958389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.037 [2024-05-13 20:47:40.958445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.037 [2024-05-13 20:47:40.958461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.037 [2024-05-13 20:47:40.958468] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.037 [2024-05-13 20:47:40.958474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.037 [2024-05-13 20:47:40.958489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.037 qpair failed and we were unable to recover it. 00:34:25.037 [2024-05-13 20:47:40.968388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.037 [2024-05-13 20:47:40.968447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.037 [2024-05-13 20:47:40.968463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.037 [2024-05-13 20:47:40.968470] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.037 [2024-05-13 20:47:40.968476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.037 [2024-05-13 20:47:40.968491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.037 qpair failed and we were unable to recover it. 00:34:25.300 [2024-05-13 20:47:40.978391] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.300 [2024-05-13 20:47:40.978489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.300 [2024-05-13 20:47:40.978505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.300 [2024-05-13 20:47:40.978512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.300 [2024-05-13 20:47:40.978518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.300 [2024-05-13 20:47:40.978533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.300 qpair failed and we were unable to recover it. 
00:34:25.300 [2024-05-13 20:47:40.988329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.300 [2024-05-13 20:47:40.988417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.300 [2024-05-13 20:47:40.988433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.300 [2024-05-13 20:47:40.988447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.300 [2024-05-13 20:47:40.988453] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.300 [2024-05-13 20:47:40.988468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.300 qpair failed and we were unable to recover it. 00:34:25.300 [2024-05-13 20:47:40.998481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.300 [2024-05-13 20:47:40.998547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.300 [2024-05-13 20:47:40.998563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.300 [2024-05-13 20:47:40.998570] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.300 [2024-05-13 20:47:40.998576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.300 [2024-05-13 20:47:40.998590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.300 qpair failed and we were unable to recover it. 00:34:25.300 [2024-05-13 20:47:41.008457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.300 [2024-05-13 20:47:41.008518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.300 [2024-05-13 20:47:41.008533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.300 [2024-05-13 20:47:41.008541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.300 [2024-05-13 20:47:41.008547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.300 [2024-05-13 20:47:41.008561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.300 qpair failed and we were unable to recover it. 
00:34:25.300 [2024-05-13 20:47:41.018492] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.300 [2024-05-13 20:47:41.018552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.300 [2024-05-13 20:47:41.018568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.300 [2024-05-13 20:47:41.018575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.300 [2024-05-13 20:47:41.018581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.300 [2024-05-13 20:47:41.018595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.300 qpair failed and we were unable to recover it. 00:34:25.300 [2024-05-13 20:47:41.028531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.300 [2024-05-13 20:47:41.028585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.300 [2024-05-13 20:47:41.028602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.300 [2024-05-13 20:47:41.028609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.300 [2024-05-13 20:47:41.028615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.300 [2024-05-13 20:47:41.028629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.300 qpair failed and we were unable to recover it. 00:34:25.300 [2024-05-13 20:47:41.038571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.300 [2024-05-13 20:47:41.038639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.300 [2024-05-13 20:47:41.038655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.300 [2024-05-13 20:47:41.038662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.300 [2024-05-13 20:47:41.038668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.300 [2024-05-13 20:47:41.038682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.300 qpair failed and we were unable to recover it. 
00:34:25.300 [2024-05-13 20:47:41.048471] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.300 [2024-05-13 20:47:41.048530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.300 [2024-05-13 20:47:41.048546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.301 [2024-05-13 20:47:41.048553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.301 [2024-05-13 20:47:41.048560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.301 [2024-05-13 20:47:41.048574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.301 qpair failed and we were unable to recover it. 00:34:25.301 [2024-05-13 20:47:41.058614] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.301 [2024-05-13 20:47:41.058706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.301 [2024-05-13 20:47:41.058722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.301 [2024-05-13 20:47:41.058729] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.301 [2024-05-13 20:47:41.058735] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.301 [2024-05-13 20:47:41.058749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.301 qpair failed and we were unable to recover it. 00:34:25.301 [2024-05-13 20:47:41.068633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.301 [2024-05-13 20:47:41.068689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.301 [2024-05-13 20:47:41.068705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.301 [2024-05-13 20:47:41.068712] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.301 [2024-05-13 20:47:41.068718] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.301 [2024-05-13 20:47:41.068732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.301 qpair failed and we were unable to recover it. 
00:34:25.301 [2024-05-13 20:47:41.078579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.301 [2024-05-13 20:47:41.078639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.301 [2024-05-13 20:47:41.078658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.301 [2024-05-13 20:47:41.078665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.301 [2024-05-13 20:47:41.078671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.301 [2024-05-13 20:47:41.078685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.301 qpair failed and we were unable to recover it. 00:34:25.301 [2024-05-13 20:47:41.088685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.301 [2024-05-13 20:47:41.088747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.301 [2024-05-13 20:47:41.088762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.301 [2024-05-13 20:47:41.088771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.301 [2024-05-13 20:47:41.088780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.301 [2024-05-13 20:47:41.088794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.301 qpair failed and we were unable to recover it. 00:34:25.301 [2024-05-13 20:47:41.098718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.301 [2024-05-13 20:47:41.098782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.301 [2024-05-13 20:47:41.098798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.301 [2024-05-13 20:47:41.098805] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.301 [2024-05-13 20:47:41.098811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.301 [2024-05-13 20:47:41.098825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.301 qpair failed and we were unable to recover it. 
00:34:25.301 [2024-05-13 20:47:41.108645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.301 [2024-05-13 20:47:41.108707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.301 [2024-05-13 20:47:41.108723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.301 [2024-05-13 20:47:41.108730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.301 [2024-05-13 20:47:41.108736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.301 [2024-05-13 20:47:41.108750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.301 qpair failed and we were unable to recover it. 00:34:25.301 [2024-05-13 20:47:41.118806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.301 [2024-05-13 20:47:41.118865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.301 [2024-05-13 20:47:41.118880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.301 [2024-05-13 20:47:41.118887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.301 [2024-05-13 20:47:41.118894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.301 [2024-05-13 20:47:41.118911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.301 qpair failed and we were unable to recover it. 00:34:25.301 [2024-05-13 20:47:41.128774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.301 [2024-05-13 20:47:41.128846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.301 [2024-05-13 20:47:41.128861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.301 [2024-05-13 20:47:41.128868] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.301 [2024-05-13 20:47:41.128874] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.301 [2024-05-13 20:47:41.128888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.301 qpair failed and we were unable to recover it. 
00:34:25.301 [2024-05-13 20:47:41.138849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.301 [2024-05-13 20:47:41.138915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.301 [2024-05-13 20:47:41.138930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.301 [2024-05-13 20:47:41.138937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.301 [2024-05-13 20:47:41.138943] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.301 [2024-05-13 20:47:41.138957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.301 qpair failed and we were unable to recover it. 00:34:25.301 [2024-05-13 20:47:41.148721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.301 [2024-05-13 20:47:41.148779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.301 [2024-05-13 20:47:41.148794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.301 [2024-05-13 20:47:41.148800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.301 [2024-05-13 20:47:41.148807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.301 [2024-05-13 20:47:41.148821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.301 qpair failed and we were unable to recover it. 00:34:25.301 [2024-05-13 20:47:41.158921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.301 [2024-05-13 20:47:41.158985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.301 [2024-05-13 20:47:41.159001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.301 [2024-05-13 20:47:41.159008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.301 [2024-05-13 20:47:41.159014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.301 [2024-05-13 20:47:41.159028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.301 qpair failed and we were unable to recover it. 
00:34:25.301 [2024-05-13 20:47:41.168905] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.301 [2024-05-13 20:47:41.168962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.301 [2024-05-13 20:47:41.168981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.301 [2024-05-13 20:47:41.168988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.301 [2024-05-13 20:47:41.168995] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.301 [2024-05-13 20:47:41.169009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.301 qpair failed and we were unable to recover it. 00:34:25.301 [2024-05-13 20:47:41.179021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.301 [2024-05-13 20:47:41.179116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.301 [2024-05-13 20:47:41.179131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.301 [2024-05-13 20:47:41.179138] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.301 [2024-05-13 20:47:41.179144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.301 [2024-05-13 20:47:41.179159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.301 qpair failed and we were unable to recover it. 00:34:25.301 [2024-05-13 20:47:41.188842] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.301 [2024-05-13 20:47:41.188899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.302 [2024-05-13 20:47:41.188915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.302 [2024-05-13 20:47:41.188922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.302 [2024-05-13 20:47:41.188928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.302 [2024-05-13 20:47:41.188943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.302 qpair failed and we were unable to recover it. 
00:34:25.302 [2024-05-13 20:47:41.199023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.302 [2024-05-13 20:47:41.199085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.302 [2024-05-13 20:47:41.199101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.302 [2024-05-13 20:47:41.199108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.302 [2024-05-13 20:47:41.199114] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.302 [2024-05-13 20:47:41.199128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.302 qpair failed and we were unable to recover it. 00:34:25.302 [2024-05-13 20:47:41.209019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.302 [2024-05-13 20:47:41.209079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.302 [2024-05-13 20:47:41.209095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.302 [2024-05-13 20:47:41.209102] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.302 [2024-05-13 20:47:41.209111] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.302 [2024-05-13 20:47:41.209125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.302 qpair failed and we were unable to recover it. 00:34:25.302 [2024-05-13 20:47:41.219059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.302 [2024-05-13 20:47:41.219123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.302 [2024-05-13 20:47:41.219139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.302 [2024-05-13 20:47:41.219146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.302 [2024-05-13 20:47:41.219152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.302 [2024-05-13 20:47:41.219166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.302 qpair failed and we were unable to recover it. 
00:34:25.302 [2024-05-13 20:47:41.229138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.302 [2024-05-13 20:47:41.229206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.302 [2024-05-13 20:47:41.229221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.302 [2024-05-13 20:47:41.229228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.302 [2024-05-13 20:47:41.229234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.302 [2024-05-13 20:47:41.229248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.302 qpair failed and we were unable to recover it. 00:34:25.302 [2024-05-13 20:47:41.239142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.302 [2024-05-13 20:47:41.239249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.302 [2024-05-13 20:47:41.239264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.302 [2024-05-13 20:47:41.239271] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.302 [2024-05-13 20:47:41.239277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.302 [2024-05-13 20:47:41.239291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.302 qpair failed and we were unable to recover it. 00:34:25.564 [2024-05-13 20:47:41.249127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.564 [2024-05-13 20:47:41.249184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.564 [2024-05-13 20:47:41.249200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.564 [2024-05-13 20:47:41.249206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.564 [2024-05-13 20:47:41.249213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.564 [2024-05-13 20:47:41.249227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.564 qpair failed and we were unable to recover it. 
00:34:25.564 [2024-05-13 20:47:41.259133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.564 [2024-05-13 20:47:41.259201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.564 [2024-05-13 20:47:41.259216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.564 [2024-05-13 20:47:41.259223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.564 [2024-05-13 20:47:41.259229] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.564 [2024-05-13 20:47:41.259243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.564 qpair failed and we were unable to recover it. 00:34:25.564 [2024-05-13 20:47:41.269262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.564 [2024-05-13 20:47:41.269329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.564 [2024-05-13 20:47:41.269345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.564 [2024-05-13 20:47:41.269352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.564 [2024-05-13 20:47:41.269358] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.564 [2024-05-13 20:47:41.269372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.564 qpair failed and we were unable to recover it. 00:34:25.564 [2024-05-13 20:47:41.279242] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.564 [2024-05-13 20:47:41.279309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.564 [2024-05-13 20:47:41.279330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.564 [2024-05-13 20:47:41.279337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.564 [2024-05-13 20:47:41.279343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.564 [2024-05-13 20:47:41.279357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.565 qpair failed and we were unable to recover it. 
00:34:25.565 [2024-05-13 20:47:41.289254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.565 [2024-05-13 20:47:41.289321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.565 [2024-05-13 20:47:41.289337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.565 [2024-05-13 20:47:41.289343] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.565 [2024-05-13 20:47:41.289349] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.565 [2024-05-13 20:47:41.289363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.565 qpair failed and we were unable to recover it. 00:34:25.565 [2024-05-13 20:47:41.299321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.565 [2024-05-13 20:47:41.299393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.565 [2024-05-13 20:47:41.299408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.565 [2024-05-13 20:47:41.299415] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.565 [2024-05-13 20:47:41.299425] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.565 [2024-05-13 20:47:41.299439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.565 qpair failed and we were unable to recover it. 00:34:25.565 [2024-05-13 20:47:41.309289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.565 [2024-05-13 20:47:41.309349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.565 [2024-05-13 20:47:41.309365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.565 [2024-05-13 20:47:41.309372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.565 [2024-05-13 20:47:41.309378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.565 [2024-05-13 20:47:41.309392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.565 qpair failed and we were unable to recover it. 
00:34:25.565 [2024-05-13 20:47:41.319344] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.565 [2024-05-13 20:47:41.319408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.565 [2024-05-13 20:47:41.319423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.565 [2024-05-13 20:47:41.319430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.565 [2024-05-13 20:47:41.319436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.565 [2024-05-13 20:47:41.319451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.565 qpair failed and we were unable to recover it. 00:34:25.565 [2024-05-13 20:47:41.329347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.565 [2024-05-13 20:47:41.329406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.565 [2024-05-13 20:47:41.329422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.565 [2024-05-13 20:47:41.329429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.565 [2024-05-13 20:47:41.329434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.565 [2024-05-13 20:47:41.329449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.565 qpair failed and we were unable to recover it. 00:34:25.565 [2024-05-13 20:47:41.339377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.565 [2024-05-13 20:47:41.339437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.565 [2024-05-13 20:47:41.339452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.565 [2024-05-13 20:47:41.339459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.565 [2024-05-13 20:47:41.339465] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.565 [2024-05-13 20:47:41.339479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.565 qpair failed and we were unable to recover it. 
00:34:25.565 [2024-05-13 20:47:41.349408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.565 [2024-05-13 20:47:41.349464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.565 [2024-05-13 20:47:41.349479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.565 [2024-05-13 20:47:41.349486] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.565 [2024-05-13 20:47:41.349492] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.565 [2024-05-13 20:47:41.349507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.565 qpair failed and we were unable to recover it. 00:34:25.565 [2024-05-13 20:47:41.359457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.565 [2024-05-13 20:47:41.359518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.565 [2024-05-13 20:47:41.359534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.565 [2024-05-13 20:47:41.359541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.565 [2024-05-13 20:47:41.359547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.565 [2024-05-13 20:47:41.359561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.565 qpair failed and we were unable to recover it. 00:34:25.565 [2024-05-13 20:47:41.369455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.565 [2024-05-13 20:47:41.369511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.565 [2024-05-13 20:47:41.369527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.565 [2024-05-13 20:47:41.369534] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.565 [2024-05-13 20:47:41.369540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.565 [2024-05-13 20:47:41.369554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.565 qpair failed and we were unable to recover it. 
00:34:25.565 [2024-05-13 20:47:41.379494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.565 [2024-05-13 20:47:41.379557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.565 [2024-05-13 20:47:41.379572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.565 [2024-05-13 20:47:41.379579] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.565 [2024-05-13 20:47:41.379585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.565 [2024-05-13 20:47:41.379599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.565 qpair failed and we were unable to recover it. 00:34:25.565 [2024-05-13 20:47:41.389529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.565 [2024-05-13 20:47:41.389589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.565 [2024-05-13 20:47:41.389604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.565 [2024-05-13 20:47:41.389614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.565 [2024-05-13 20:47:41.389620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.565 [2024-05-13 20:47:41.389635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.565 qpair failed and we were unable to recover it. 00:34:25.565 [2024-05-13 20:47:41.399554] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.565 [2024-05-13 20:47:41.399609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.565 [2024-05-13 20:47:41.399625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.565 [2024-05-13 20:47:41.399631] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.565 [2024-05-13 20:47:41.399637] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.565 [2024-05-13 20:47:41.399651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.565 qpair failed and we were unable to recover it. 
00:34:25.565 [2024-05-13 20:47:41.409553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.565 [2024-05-13 20:47:41.409612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.565 [2024-05-13 20:47:41.409627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.565 [2024-05-13 20:47:41.409634] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.565 [2024-05-13 20:47:41.409640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.565 [2024-05-13 20:47:41.409654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.565 qpair failed and we were unable to recover it. 00:34:25.565 [2024-05-13 20:47:41.419597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.565 [2024-05-13 20:47:41.419663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.565 [2024-05-13 20:47:41.419678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.565 [2024-05-13 20:47:41.419685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.565 [2024-05-13 20:47:41.419691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.566 [2024-05-13 20:47:41.419705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.566 qpair failed and we were unable to recover it. 00:34:25.566 [2024-05-13 20:47:41.429625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.566 [2024-05-13 20:47:41.429744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.566 [2024-05-13 20:47:41.429760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.566 [2024-05-13 20:47:41.429767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.566 [2024-05-13 20:47:41.429773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.566 [2024-05-13 20:47:41.429787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.566 qpair failed and we were unable to recover it. 
00:34:25.566 [2024-05-13 20:47:41.439681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.566 [2024-05-13 20:47:41.439784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.566 [2024-05-13 20:47:41.439801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.566 [2024-05-13 20:47:41.439808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.566 [2024-05-13 20:47:41.439814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.566 [2024-05-13 20:47:41.439828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.566 qpair failed and we were unable to recover it. 00:34:25.566 [2024-05-13 20:47:41.449577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.566 [2024-05-13 20:47:41.449638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.566 [2024-05-13 20:47:41.449654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.566 [2024-05-13 20:47:41.449661] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.566 [2024-05-13 20:47:41.449667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.566 [2024-05-13 20:47:41.449681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.566 qpair failed and we were unable to recover it. 00:34:25.566 [2024-05-13 20:47:41.459716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.566 [2024-05-13 20:47:41.459782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.566 [2024-05-13 20:47:41.459798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.566 [2024-05-13 20:47:41.459805] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.566 [2024-05-13 20:47:41.459811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.566 [2024-05-13 20:47:41.459825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.566 qpair failed and we were unable to recover it. 
00:34:25.566 [2024-05-13 20:47:41.469786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.566 [2024-05-13 20:47:41.469863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.566 [2024-05-13 20:47:41.469879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.566 [2024-05-13 20:47:41.469886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.566 [2024-05-13 20:47:41.469892] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.566 [2024-05-13 20:47:41.469906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.566 qpair failed and we were unable to recover it. 00:34:25.566 [2024-05-13 20:47:41.479744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.566 [2024-05-13 20:47:41.479801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.566 [2024-05-13 20:47:41.479819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.566 [2024-05-13 20:47:41.479826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.566 [2024-05-13 20:47:41.479832] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.566 [2024-05-13 20:47:41.479846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.566 qpair failed and we were unable to recover it. 00:34:25.566 [2024-05-13 20:47:41.489793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.566 [2024-05-13 20:47:41.489852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.566 [2024-05-13 20:47:41.489867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.566 [2024-05-13 20:47:41.489874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.566 [2024-05-13 20:47:41.489880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.566 [2024-05-13 20:47:41.489894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.566 qpair failed and we were unable to recover it. 
00:34:25.566 [2024-05-13 20:47:41.499838] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.566 [2024-05-13 20:47:41.499947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.566 [2024-05-13 20:47:41.499964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.566 [2024-05-13 20:47:41.499971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.566 [2024-05-13 20:47:41.499977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.566 [2024-05-13 20:47:41.499993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.566 qpair failed and we were unable to recover it. 00:34:25.829 [2024-05-13 20:47:41.509876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.829 [2024-05-13 20:47:41.509936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.829 [2024-05-13 20:47:41.509951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.829 [2024-05-13 20:47:41.509959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.829 [2024-05-13 20:47:41.509965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.829 [2024-05-13 20:47:41.509979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-05-13 20:47:41.519868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.829 [2024-05-13 20:47:41.519924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.829 [2024-05-13 20:47:41.519939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.829 [2024-05-13 20:47:41.519946] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.829 [2024-05-13 20:47:41.519952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.829 [2024-05-13 20:47:41.519971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.829 qpair failed and we were unable to recover it. 
00:34:25.829 [2024-05-13 20:47:41.529778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.829 [2024-05-13 20:47:41.529835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.829 [2024-05-13 20:47:41.529851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.829 [2024-05-13 20:47:41.529859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.829 [2024-05-13 20:47:41.529864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.829 [2024-05-13 20:47:41.529879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-05-13 20:47:41.539914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.829 [2024-05-13 20:47:41.539980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.829 [2024-05-13 20:47:41.539995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.829 [2024-05-13 20:47:41.540002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.829 [2024-05-13 20:47:41.540008] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.829 [2024-05-13 20:47:41.540022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-05-13 20:47:41.549940] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.829 [2024-05-13 20:47:41.550004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.829 [2024-05-13 20:47:41.550028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.829 [2024-05-13 20:47:41.550036] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.829 [2024-05-13 20:47:41.550043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.829 [2024-05-13 20:47:41.550062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.829 qpair failed and we were unable to recover it. 
00:34:25.829 [2024-05-13 20:47:41.559977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.829 [2024-05-13 20:47:41.560091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.829 [2024-05-13 20:47:41.560115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.829 [2024-05-13 20:47:41.560123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.829 [2024-05-13 20:47:41.560130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.829 [2024-05-13 20:47:41.560149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-05-13 20:47:41.570001] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.829 [2024-05-13 20:47:41.570069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.829 [2024-05-13 20:47:41.570096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.829 [2024-05-13 20:47:41.570105] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.829 [2024-05-13 20:47:41.570111] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.829 [2024-05-13 20:47:41.570129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-05-13 20:47:41.580034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.829 [2024-05-13 20:47:41.580105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.829 [2024-05-13 20:47:41.580128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.829 [2024-05-13 20:47:41.580136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.829 [2024-05-13 20:47:41.580143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.829 [2024-05-13 20:47:41.580161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.829 qpair failed and we were unable to recover it. 
00:34:25.829 [2024-05-13 20:47:41.590043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.829 [2024-05-13 20:47:41.590101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.829 [2024-05-13 20:47:41.590119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.829 [2024-05-13 20:47:41.590126] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.829 [2024-05-13 20:47:41.590132] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.829 [2024-05-13 20:47:41.590147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-05-13 20:47:41.600073] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.829 [2024-05-13 20:47:41.600129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.829 [2024-05-13 20:47:41.600146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.829 [2024-05-13 20:47:41.600153] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.829 [2024-05-13 20:47:41.600159] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.829 [2024-05-13 20:47:41.600173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-05-13 20:47:41.610104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.829 [2024-05-13 20:47:41.610175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.829 [2024-05-13 20:47:41.610190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.829 [2024-05-13 20:47:41.610197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.829 [2024-05-13 20:47:41.610203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.829 [2024-05-13 20:47:41.610222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.829 qpair failed and we were unable to recover it. 
00:34:25.829 [2024-05-13 20:47:41.620135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.829 [2024-05-13 20:47:41.620200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.829 [2024-05-13 20:47:41.620215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.829 [2024-05-13 20:47:41.620222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.829 [2024-05-13 20:47:41.620228] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.829 [2024-05-13 20:47:41.620243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-05-13 20:47:41.630153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.829 [2024-05-13 20:47:41.630211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.829 [2024-05-13 20:47:41.630227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.829 [2024-05-13 20:47:41.630234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.830 [2024-05-13 20:47:41.630240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.830 [2024-05-13 20:47:41.630254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-05-13 20:47:41.640212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.830 [2024-05-13 20:47:41.640271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.830 [2024-05-13 20:47:41.640287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.830 [2024-05-13 20:47:41.640294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.830 [2024-05-13 20:47:41.640300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.830 [2024-05-13 20:47:41.640320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.830 qpair failed and we were unable to recover it. 
00:34:25.830 [2024-05-13 20:47:41.650224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.830 [2024-05-13 20:47:41.650284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.830 [2024-05-13 20:47:41.650299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.830 [2024-05-13 20:47:41.650306] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.830 [2024-05-13 20:47:41.650316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.830 [2024-05-13 20:47:41.650331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-05-13 20:47:41.660131] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.830 [2024-05-13 20:47:41.660200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.830 [2024-05-13 20:47:41.660217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.830 [2024-05-13 20:47:41.660224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.830 [2024-05-13 20:47:41.660230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.830 [2024-05-13 20:47:41.660250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-05-13 20:47:41.670256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.830 [2024-05-13 20:47:41.670311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.830 [2024-05-13 20:47:41.670332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.830 [2024-05-13 20:47:41.670339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.830 [2024-05-13 20:47:41.670345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.830 [2024-05-13 20:47:41.670360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.830 qpair failed and we were unable to recover it. 
00:34:25.830 [2024-05-13 20:47:41.680304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.830 [2024-05-13 20:47:41.680364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.830 [2024-05-13 20:47:41.680379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.830 [2024-05-13 20:47:41.680386] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.830 [2024-05-13 20:47:41.680392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.830 [2024-05-13 20:47:41.680407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-05-13 20:47:41.690378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.830 [2024-05-13 20:47:41.690453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.830 [2024-05-13 20:47:41.690469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.830 [2024-05-13 20:47:41.690476] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.830 [2024-05-13 20:47:41.690482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.830 [2024-05-13 20:47:41.690496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-05-13 20:47:41.700319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.830 [2024-05-13 20:47:41.700382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.830 [2024-05-13 20:47:41.700397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.830 [2024-05-13 20:47:41.700404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.830 [2024-05-13 20:47:41.700414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.830 [2024-05-13 20:47:41.700428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.830 qpair failed and we were unable to recover it. 
00:34:25.830 [2024-05-13 20:47:41.710376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.830 [2024-05-13 20:47:41.710436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.830 [2024-05-13 20:47:41.710451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.830 [2024-05-13 20:47:41.710458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.830 [2024-05-13 20:47:41.710465] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.830 [2024-05-13 20:47:41.710479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-05-13 20:47:41.720427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.830 [2024-05-13 20:47:41.720492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.830 [2024-05-13 20:47:41.720507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.830 [2024-05-13 20:47:41.720514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.830 [2024-05-13 20:47:41.720520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.830 [2024-05-13 20:47:41.720535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-05-13 20:47:41.730460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.830 [2024-05-13 20:47:41.730520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.830 [2024-05-13 20:47:41.730536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.830 [2024-05-13 20:47:41.730543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.830 [2024-05-13 20:47:41.730549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.830 [2024-05-13 20:47:41.730563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.830 qpair failed and we were unable to recover it. 
00:34:25.830 [2024-05-13 20:47:41.740475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.830 [2024-05-13 20:47:41.740539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.830 [2024-05-13 20:47:41.740555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.830 [2024-05-13 20:47:41.740562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.830 [2024-05-13 20:47:41.740568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.830 [2024-05-13 20:47:41.740582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-05-13 20:47:41.750404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.830 [2024-05-13 20:47:41.750510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.830 [2024-05-13 20:47:41.750527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.830 [2024-05-13 20:47:41.750534] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.830 [2024-05-13 20:47:41.750540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.830 [2024-05-13 20:47:41.750554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-05-13 20:47:41.760544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.830 [2024-05-13 20:47:41.760605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.830 [2024-05-13 20:47:41.760621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.830 [2024-05-13 20:47:41.760628] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.830 [2024-05-13 20:47:41.760634] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.830 [2024-05-13 20:47:41.760648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.830 qpair failed and we were unable to recover it. 
00:34:25.830 [2024-05-13 20:47:41.770538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.830 [2024-05-13 20:47:41.770602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.831 [2024-05-13 20:47:41.770617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.831 [2024-05-13 20:47:41.770624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.831 [2024-05-13 20:47:41.770630] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:25.831 [2024-05-13 20:47:41.770645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:25.831 qpair failed and we were unable to recover it. 00:34:26.093 [2024-05-13 20:47:41.780462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.093 [2024-05-13 20:47:41.780525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.093 [2024-05-13 20:47:41.780540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.093 [2024-05-13 20:47:41.780547] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.093 [2024-05-13 20:47:41.780553] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.093 [2024-05-13 20:47:41.780567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.093 qpair failed and we were unable to recover it. 00:34:26.093 [2024-05-13 20:47:41.790680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.093 [2024-05-13 20:47:41.790738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.093 [2024-05-13 20:47:41.790753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.093 [2024-05-13 20:47:41.790764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.093 [2024-05-13 20:47:41.790770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.093 [2024-05-13 20:47:41.790784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.093 qpair failed and we were unable to recover it. 
00:34:26.093 [2024-05-13 20:47:41.800633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.093 [2024-05-13 20:47:41.800692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.093 [2024-05-13 20:47:41.800707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.093 [2024-05-13 20:47:41.800714] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.093 [2024-05-13 20:47:41.800721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.093 [2024-05-13 20:47:41.800735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.093 qpair failed and we were unable to recover it. 00:34:26.093 [2024-05-13 20:47:41.810651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.093 [2024-05-13 20:47:41.810709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.093 [2024-05-13 20:47:41.810724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.093 [2024-05-13 20:47:41.810732] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.093 [2024-05-13 20:47:41.810738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.093 [2024-05-13 20:47:41.810752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.093 qpair failed and we were unable to recover it. 00:34:26.093 [2024-05-13 20:47:41.820706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.093 [2024-05-13 20:47:41.820766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.093 [2024-05-13 20:47:41.820782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.093 [2024-05-13 20:47:41.820789] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.093 [2024-05-13 20:47:41.820795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.093 [2024-05-13 20:47:41.820810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.093 qpair failed and we were unable to recover it. 
00:34:26.093 [2024-05-13 20:47:41.830629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.093 [2024-05-13 20:47:41.830733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.093 [2024-05-13 20:47:41.830749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.093 [2024-05-13 20:47:41.830756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.093 [2024-05-13 20:47:41.830761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.093 [2024-05-13 20:47:41.830780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.093 qpair failed and we were unable to recover it. 00:34:26.093 [2024-05-13 20:47:41.840734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.093 [2024-05-13 20:47:41.840790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.093 [2024-05-13 20:47:41.840806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.093 [2024-05-13 20:47:41.840813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.093 [2024-05-13 20:47:41.840819] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.093 [2024-05-13 20:47:41.840833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.093 qpair failed and we were unable to recover it. 00:34:26.093 [2024-05-13 20:47:41.850778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.093 [2024-05-13 20:47:41.850839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.093 [2024-05-13 20:47:41.850854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.093 [2024-05-13 20:47:41.850861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.093 [2024-05-13 20:47:41.850867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.093 [2024-05-13 20:47:41.850882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.093 qpair failed and we were unable to recover it. 
00:34:26.093 [2024-05-13 20:47:41.860853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.093 [2024-05-13 20:47:41.860923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.093 [2024-05-13 20:47:41.860939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.093 [2024-05-13 20:47:41.860945] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.094 [2024-05-13 20:47:41.860952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.094 [2024-05-13 20:47:41.860966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.094 qpair failed and we were unable to recover it. 00:34:26.094 [2024-05-13 20:47:41.870840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.094 [2024-05-13 20:47:41.870941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.094 [2024-05-13 20:47:41.870957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.094 [2024-05-13 20:47:41.870964] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.094 [2024-05-13 20:47:41.870970] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.094 [2024-05-13 20:47:41.870984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.094 qpair failed and we were unable to recover it. 00:34:26.094 [2024-05-13 20:47:41.880788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.094 [2024-05-13 20:47:41.880848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.094 [2024-05-13 20:47:41.880863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.094 [2024-05-13 20:47:41.880873] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.094 [2024-05-13 20:47:41.880879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.094 [2024-05-13 20:47:41.880894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.094 qpair failed and we were unable to recover it. 
00:34:26.094 [2024-05-13 20:47:41.890926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.094 [2024-05-13 20:47:41.890984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.094 [2024-05-13 20:47:41.891000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.094 [2024-05-13 20:47:41.891007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.094 [2024-05-13 20:47:41.891013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.094 [2024-05-13 20:47:41.891027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.094 qpair failed and we were unable to recover it. 00:34:26.094 [2024-05-13 20:47:41.900938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.094 [2024-05-13 20:47:41.900998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.094 [2024-05-13 20:47:41.901014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.094 [2024-05-13 20:47:41.901021] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.094 [2024-05-13 20:47:41.901027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.094 [2024-05-13 20:47:41.901041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.094 qpair failed and we were unable to recover it. 00:34:26.094 [2024-05-13 20:47:41.910825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.094 [2024-05-13 20:47:41.910884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.094 [2024-05-13 20:47:41.910899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.094 [2024-05-13 20:47:41.910906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.094 [2024-05-13 20:47:41.910912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.094 [2024-05-13 20:47:41.910926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.094 qpair failed and we were unable to recover it. 
00:34:26.094 [2024-05-13 20:47:41.920986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.094 [2024-05-13 20:47:41.921047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.094 [2024-05-13 20:47:41.921062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.094 [2024-05-13 20:47:41.921069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.094 [2024-05-13 20:47:41.921074] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.094 [2024-05-13 20:47:41.921088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.094 qpair failed and we were unable to recover it. 00:34:26.094 [2024-05-13 20:47:41.930888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.094 [2024-05-13 20:47:41.930977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.094 [2024-05-13 20:47:41.930992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.094 [2024-05-13 20:47:41.931000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.094 [2024-05-13 20:47:41.931006] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.094 [2024-05-13 20:47:41.931020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.094 qpair failed and we were unable to recover it. 00:34:26.094 [2024-05-13 20:47:41.940990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.094 [2024-05-13 20:47:41.941062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.094 [2024-05-13 20:47:41.941077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.094 [2024-05-13 20:47:41.941084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.094 [2024-05-13 20:47:41.941090] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.094 [2024-05-13 20:47:41.941104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.094 qpair failed and we were unable to recover it. 
00:34:26.094 [2024-05-13 20:47:41.951040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.094 [2024-05-13 20:47:41.951117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.094 [2024-05-13 20:47:41.951132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.094 [2024-05-13 20:47:41.951139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.094 [2024-05-13 20:47:41.951145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.094 [2024-05-13 20:47:41.951159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.094 qpair failed and we were unable to recover it. 00:34:26.094 [2024-05-13 20:47:41.961110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.094 [2024-05-13 20:47:41.961202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.094 [2024-05-13 20:47:41.961217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.094 [2024-05-13 20:47:41.961224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.094 [2024-05-13 20:47:41.961230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.094 [2024-05-13 20:47:41.961244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.094 qpair failed and we were unable to recover it. 00:34:26.094 [2024-05-13 20:47:41.971191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.094 [2024-05-13 20:47:41.971252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.094 [2024-05-13 20:47:41.971271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.094 [2024-05-13 20:47:41.971278] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.094 [2024-05-13 20:47:41.971284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.094 [2024-05-13 20:47:41.971298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.094 qpair failed and we were unable to recover it. 
00:34:26.094 [2024-05-13 20:47:41.981137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.094 [2024-05-13 20:47:41.981264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.094 [2024-05-13 20:47:41.981280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.094 [2024-05-13 20:47:41.981287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.094 [2024-05-13 20:47:41.981293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.094 [2024-05-13 20:47:41.981307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.094 qpair failed and we were unable to recover it. 00:34:26.094 [2024-05-13 20:47:41.991145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.094 [2024-05-13 20:47:41.991204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.094 [2024-05-13 20:47:41.991219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.094 [2024-05-13 20:47:41.991226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.094 [2024-05-13 20:47:41.991232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.094 [2024-05-13 20:47:41.991246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.094 qpair failed and we were unable to recover it. 00:34:26.094 [2024-05-13 20:47:42.001179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.094 [2024-05-13 20:47:42.001239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.094 [2024-05-13 20:47:42.001254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.095 [2024-05-13 20:47:42.001261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.095 [2024-05-13 20:47:42.001267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.095 [2024-05-13 20:47:42.001281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.095 qpair failed and we were unable to recover it. 
00:34:26.095 [2024-05-13 20:47:42.011207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.095 [2024-05-13 20:47:42.011265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.095 [2024-05-13 20:47:42.011280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.095 [2024-05-13 20:47:42.011287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.095 [2024-05-13 20:47:42.011293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.095 [2024-05-13 20:47:42.011311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.095 qpair failed and we were unable to recover it. 00:34:26.095 [2024-05-13 20:47:42.021238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.095 [2024-05-13 20:47:42.021296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.095 [2024-05-13 20:47:42.021311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.095 [2024-05-13 20:47:42.021323] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.095 [2024-05-13 20:47:42.021329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.095 [2024-05-13 20:47:42.021343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.095 qpair failed and we were unable to recover it. 00:34:26.095 [2024-05-13 20:47:42.031261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.095 [2024-05-13 20:47:42.031324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.095 [2024-05-13 20:47:42.031340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.095 [2024-05-13 20:47:42.031347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.095 [2024-05-13 20:47:42.031353] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.095 [2024-05-13 20:47:42.031367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.095 qpair failed and we were unable to recover it. 
00:34:26.357 [2024-05-13 20:47:42.041275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.357 [2024-05-13 20:47:42.041338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.357 [2024-05-13 20:47:42.041354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.357 [2024-05-13 20:47:42.041361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.357 [2024-05-13 20:47:42.041367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.357 [2024-05-13 20:47:42.041381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.357 qpair failed and we were unable to recover it. 00:34:26.357 [2024-05-13 20:47:42.051315] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.357 [2024-05-13 20:47:42.051377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.357 [2024-05-13 20:47:42.051392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.357 [2024-05-13 20:47:42.051399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.357 [2024-05-13 20:47:42.051406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.357 [2024-05-13 20:47:42.051420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.357 qpair failed and we were unable to recover it. 00:34:26.357 [2024-05-13 20:47:42.061339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.357 [2024-05-13 20:47:42.061404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.357 [2024-05-13 20:47:42.061423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.357 [2024-05-13 20:47:42.061430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.357 [2024-05-13 20:47:42.061436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.357 [2024-05-13 20:47:42.061450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.357 qpair failed and we were unable to recover it. 
00:34:26.357 [2024-05-13 20:47:42.071362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.357 [2024-05-13 20:47:42.071479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.357 [2024-05-13 20:47:42.071494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.357 [2024-05-13 20:47:42.071501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.357 [2024-05-13 20:47:42.071507] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.357 [2024-05-13 20:47:42.071521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.357 qpair failed and we were unable to recover it. 00:34:26.357 [2024-05-13 20:47:42.081352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.357 [2024-05-13 20:47:42.081412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.357 [2024-05-13 20:47:42.081427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.357 [2024-05-13 20:47:42.081434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.357 [2024-05-13 20:47:42.081440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.357 [2024-05-13 20:47:42.081455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.357 qpair failed and we were unable to recover it. 00:34:26.357 [2024-05-13 20:47:42.091445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.357 [2024-05-13 20:47:42.091543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.357 [2024-05-13 20:47:42.091558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.357 [2024-05-13 20:47:42.091565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.357 [2024-05-13 20:47:42.091571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.357 [2024-05-13 20:47:42.091586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.357 qpair failed and we were unable to recover it. 
00:34:26.357 [2024-05-13 20:47:42.101467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.357 [2024-05-13 20:47:42.101578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.357 [2024-05-13 20:47:42.101594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.357 [2024-05-13 20:47:42.101601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.357 [2024-05-13 20:47:42.101610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.357 [2024-05-13 20:47:42.101624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.357 qpair failed and we were unable to recover it. 00:34:26.357 [2024-05-13 20:47:42.111374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.357 [2024-05-13 20:47:42.111439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.357 [2024-05-13 20:47:42.111454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.357 [2024-05-13 20:47:42.111461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.357 [2024-05-13 20:47:42.111467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.357 [2024-05-13 20:47:42.111481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.357 qpair failed and we were unable to recover it. 00:34:26.357 [2024-05-13 20:47:42.121534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.357 [2024-05-13 20:47:42.121592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.358 [2024-05-13 20:47:42.121607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.358 [2024-05-13 20:47:42.121614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.358 [2024-05-13 20:47:42.121620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.358 [2024-05-13 20:47:42.121634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.358 qpair failed and we were unable to recover it. 
00:34:26.358 [2024-05-13 20:47:42.131533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.358 [2024-05-13 20:47:42.131590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.358 [2024-05-13 20:47:42.131605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.358 [2024-05-13 20:47:42.131612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.358 [2024-05-13 20:47:42.131619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.358 [2024-05-13 20:47:42.131632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.358 qpair failed and we were unable to recover it. 00:34:26.358 [2024-05-13 20:47:42.141582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.358 [2024-05-13 20:47:42.141680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.358 [2024-05-13 20:47:42.141696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.358 [2024-05-13 20:47:42.141703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.358 [2024-05-13 20:47:42.141710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.358 [2024-05-13 20:47:42.141724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.358 qpair failed and we were unable to recover it. 00:34:26.358 [2024-05-13 20:47:42.151579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.358 [2024-05-13 20:47:42.151638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.358 [2024-05-13 20:47:42.151653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.358 [2024-05-13 20:47:42.151660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.358 [2024-05-13 20:47:42.151666] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.358 [2024-05-13 20:47:42.151680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.358 qpair failed and we were unable to recover it. 
00:34:26.358 [2024-05-13 20:47:42.161629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.358 [2024-05-13 20:47:42.161689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.358 [2024-05-13 20:47:42.161704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.358 [2024-05-13 20:47:42.161711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.358 [2024-05-13 20:47:42.161717] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.358 [2024-05-13 20:47:42.161731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.358 qpair failed and we were unable to recover it. 00:34:26.358 [2024-05-13 20:47:42.171640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.358 [2024-05-13 20:47:42.171699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.358 [2024-05-13 20:47:42.171715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.358 [2024-05-13 20:47:42.171722] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.358 [2024-05-13 20:47:42.171728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.358 [2024-05-13 20:47:42.171742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.358 qpair failed and we were unable to recover it. 00:34:26.358 [2024-05-13 20:47:42.181671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.358 [2024-05-13 20:47:42.181736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.358 [2024-05-13 20:47:42.181752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.358 [2024-05-13 20:47:42.181759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.358 [2024-05-13 20:47:42.181765] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.358 [2024-05-13 20:47:42.181778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.358 qpair failed and we were unable to recover it. 
00:34:26.358 [2024-05-13 20:47:42.191647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.358 [2024-05-13 20:47:42.191719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.358 [2024-05-13 20:47:42.191735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.358 [2024-05-13 20:47:42.191745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.358 [2024-05-13 20:47:42.191751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.358 [2024-05-13 20:47:42.191765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.358 qpair failed and we were unable to recover it. 00:34:26.358 [2024-05-13 20:47:42.201595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.358 [2024-05-13 20:47:42.201653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.358 [2024-05-13 20:47:42.201669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.358 [2024-05-13 20:47:42.201676] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.358 [2024-05-13 20:47:42.201682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.358 [2024-05-13 20:47:42.201696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.358 qpair failed and we were unable to recover it. 00:34:26.358 [2024-05-13 20:47:42.211792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.358 [2024-05-13 20:47:42.211852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.358 [2024-05-13 20:47:42.211869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.358 [2024-05-13 20:47:42.211876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.358 [2024-05-13 20:47:42.211882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.358 [2024-05-13 20:47:42.211898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.358 qpair failed and we were unable to recover it. 
00:34:26.358 [2024-05-13 20:47:42.221769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.358 [2024-05-13 20:47:42.221833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.358 [2024-05-13 20:47:42.221849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.358 [2024-05-13 20:47:42.221855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.358 [2024-05-13 20:47:42.221861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.358 [2024-05-13 20:47:42.221876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.358 qpair failed and we were unable to recover it. 00:34:26.358 [2024-05-13 20:47:42.231797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.358 [2024-05-13 20:47:42.231856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.358 [2024-05-13 20:47:42.231872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.358 [2024-05-13 20:47:42.231879] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.358 [2024-05-13 20:47:42.231885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.358 [2024-05-13 20:47:42.231900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.358 qpair failed and we were unable to recover it. 00:34:26.358 [2024-05-13 20:47:42.241717] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.358 [2024-05-13 20:47:42.241791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.358 [2024-05-13 20:47:42.241807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.358 [2024-05-13 20:47:42.241814] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.358 [2024-05-13 20:47:42.241820] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.358 [2024-05-13 20:47:42.241835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.358 qpair failed and we were unable to recover it. 
00:34:26.358 [2024-05-13 20:47:42.251849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.358 [2024-05-13 20:47:42.251919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.358 [2024-05-13 20:47:42.251934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.358 [2024-05-13 20:47:42.251941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.358 [2024-05-13 20:47:42.251947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.358 [2024-05-13 20:47:42.251961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.358 qpair failed and we were unable to recover it. 00:34:26.358 [2024-05-13 20:47:42.261762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.359 [2024-05-13 20:47:42.261840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.359 [2024-05-13 20:47:42.261856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.359 [2024-05-13 20:47:42.261863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.359 [2024-05-13 20:47:42.261869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.359 [2024-05-13 20:47:42.261883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.359 qpair failed and we were unable to recover it. 00:34:26.359 [2024-05-13 20:47:42.271905] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.359 [2024-05-13 20:47:42.272003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.359 [2024-05-13 20:47:42.272020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.359 [2024-05-13 20:47:42.272028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.359 [2024-05-13 20:47:42.272034] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.359 [2024-05-13 20:47:42.272049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.359 qpair failed and we were unable to recover it. 
00:34:26.359 [2024-05-13 20:47:42.281947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.359 [2024-05-13 20:47:42.282001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.359 [2024-05-13 20:47:42.282017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.359 [2024-05-13 20:47:42.282027] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.359 [2024-05-13 20:47:42.282034] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.359 [2024-05-13 20:47:42.282048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.359 qpair failed and we were unable to recover it. 00:34:26.359 [2024-05-13 20:47:42.291954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.359 [2024-05-13 20:47:42.292016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.359 [2024-05-13 20:47:42.292039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.359 [2024-05-13 20:47:42.292048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.359 [2024-05-13 20:47:42.292054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd354000b90 00:34:26.359 [2024-05-13 20:47:42.292073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.359 qpair failed and we were unable to recover it. 
00:34:26.359 [2024-05-13 20:47:42.292482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21520f0 is same with the state(5) to be set 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Write completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Write completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Write completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Write completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Write completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Write completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Write completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Write completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Write completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Write completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Write completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Write completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 Read completed with error (sct=0, sc=8) 00:34:26.359 starting I/O failed 00:34:26.359 [2024-05-13 20:47:42.292961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.620 [2024-05-13 20:47:42.302014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.620 [2024-05-13 20:47:42.302085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.620 [2024-05-13 20:47:42.302111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect 
command completed with error: sct 1, sc 130 00:34:26.620 [2024-05-13 20:47:42.302123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.621 [2024-05-13 20:47:42.302130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2144520 00:34:26.621 [2024-05-13 20:47:42.302149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.621 qpair failed and we were unable to recover it. 00:34:26.621 [2024-05-13 20:47:42.311978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.621 [2024-05-13 20:47:42.312054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.621 [2024-05-13 20:47:42.312078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.621 [2024-05-13 20:47:42.312087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.621 [2024-05-13 20:47:42.312093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2144520 00:34:26.621 [2024-05-13 20:47:42.312111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.621 qpair failed and we were unable to recover it. 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 
00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 [2024-05-13 20:47:42.312968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:26.621 [2024-05-13 20:47:42.322084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.621 [2024-05-13 20:47:42.322225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.621 [2024-05-13 20:47:42.322277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.621 [2024-05-13 20:47:42.322299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.621 [2024-05-13 20:47:42.322341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd35c000b90 00:34:26.621 [2024-05-13 20:47:42.322390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:26.621 qpair failed and we were unable to recover it. 00:34:26.621 [2024-05-13 20:47:42.332081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.621 [2024-05-13 20:47:42.332194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.621 [2024-05-13 20:47:42.332226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.621 [2024-05-13 20:47:42.332240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.621 [2024-05-13 20:47:42.332254] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd35c000b90 00:34:26.621 [2024-05-13 20:47:42.332285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:26.621 qpair failed and we were unable to recover it. 
00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Write completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 Read completed with error (sct=0, sc=8) 00:34:26.621 starting I/O failed 00:34:26.621 [2024-05-13 20:47:42.332701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.621 [2024-05-13 20:47:42.342112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.621 [2024-05-13 20:47:42.342201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.621 [2024-05-13 20:47:42.342216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.621 [2024-05-13 20:47:42.342221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:34:26.621 [2024-05-13 20:47:42.342226] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd360000b90 00:34:26.621 [2024-05-13 20:47:42.342244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.621 qpair failed and we were unable to recover it. 00:34:26.621 [2024-05-13 20:47:42.352118] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.621 [2024-05-13 20:47:42.352168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.621 [2024-05-13 20:47:42.352181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.621 [2024-05-13 20:47:42.352186] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.621 [2024-05-13 20:47:42.352191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd360000b90 00:34:26.621 [2024-05-13 20:47:42.352202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.621 qpair failed and we were unable to recover it. 00:34:26.621 [2024-05-13 20:47:42.352553] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21520f0 (9): Bad file descriptor 00:34:26.621 Initializing NVMe Controllers 00:34:26.621 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:26.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:26.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:26.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:26.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:26.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:26.621 Initialization complete. Launching workers. 
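The "Initializing NVMe Controllers ... Associating TCP ... with lcore N" banner comes from an SPDK host-side example application that attaches to the subsystem and spreads queue pairs across cores while the disconnects are injected. The exact binary and options used by target_disconnect.sh are not visible in this excerpt; a minimal standalone run against the same subsystem, assuming the bundled perf example and commonly used options, might look like the sketch below.

  # Hypothetical I/O load against the target from the log (binary path and options are assumptions).
  ./build/examples/perf -q 32 -o 4096 -w randrw -M 50 -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'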
00:34:26.622 Starting thread on core 1 00:34:26.622 Starting thread on core 2 00:34:26.622 Starting thread on core 3 00:34:26.622 Starting thread on core 0 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@59 -- # sync 00:34:26.622 00:34:26.622 real 0m11.336s 00:34:26.622 user 0m20.967s 00:34:26.622 sys 0m3.666s 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:26.622 ************************************ 00:34:26.622 END TEST nvmf_target_disconnect_tc2 00:34:26.622 ************************************ 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@85 -- # nvmftestfini 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:26.622 rmmod nvme_tcp 00:34:26.622 rmmod nvme_fabrics 00:34:26.622 rmmod nvme_keyring 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3303137 ']' 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3303137 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3303137 ']' 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 3303137 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3303137 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3303137' 00:34:26.622 killing process with pid 3303137 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 3303137 00:34:26.622 [2024-05-13 20:47:42.524577] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 
1 times 00:34:26.622 20:47:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 3303137 00:34:26.883 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:26.883 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:26.883 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:26.883 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:26.883 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:26.883 20:47:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.883 20:47:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:26.883 20:47:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.796 20:47:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:28.796 00:34:28.796 real 0m22.218s 00:34:28.796 user 0m48.828s 00:34:28.796 sys 0m10.050s 00:34:28.796 20:47:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:28.796 20:47:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:28.796 ************************************ 00:34:28.796 END TEST nvmf_target_disconnect 00:34:28.796 ************************************ 00:34:29.057 20:47:44 nvmf_tcp -- nvmf/nvmf.sh@124 -- # timing_exit host 00:34:29.057 20:47:44 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:29.057 20:47:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.057 20:47:44 nvmf_tcp -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:34:29.057 00:34:29.057 real 27m31.109s 00:34:29.057 user 68m31.674s 00:34:29.057 sys 7m52.065s 00:34:29.057 20:47:44 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:29.057 20:47:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.057 ************************************ 00:34:29.057 END TEST nvmf_tcp 00:34:29.057 ************************************ 00:34:29.057 20:47:44 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:34:29.057 20:47:44 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:29.057 20:47:44 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:29.057 20:47:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:29.057 20:47:44 -- common/autotest_common.sh@10 -- # set +x 00:34:29.057 ************************************ 00:34:29.057 START TEST spdkcli_nvmf_tcp 00:34:29.057 ************************************ 00:34:29.057 20:47:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:29.057 * Looking for test storage... 
00:34:29.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:29.057 20:47:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:29.057 20:47:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:29.057 20:47:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:29.057 20:47:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.057 20:47:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3304957 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3304957 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 3304957 ']' 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:29.319 20:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.319 [2024-05-13 20:47:45.085062] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:34:29.319 [2024-05-13 20:47:45.085136] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3304957 ] 00:34:29.319 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.319 [2024-05-13 20:47:45.155344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:29.319 [2024-05-13 20:47:45.230594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:29.319 [2024-05-13 20:47:45.230683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.929 20:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:29.929 20:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:29.929 20:47:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:29.929 20:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:29.929 20:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:30.189 20:47:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:30.189 20:47:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:30.189 20:47:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:30.189 20:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:30.189 20:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:30.190 20:47:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:30.190 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:30.190 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:30.190 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:30.190 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:30.190 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:30.190 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:30.190 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:30.190 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:30.190 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:30.190 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:30.190 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:30.190 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:30.190 ' 00:34:32.731 [2024-05-13 20:47:48.222368] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:33.671 [2024-05-13 20:47:49.385914] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:33.671 [2024-05-13 20:47:49.386424] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:35.588 [2024-05-13 20:47:51.520941] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:37.500 [2024-05-13 20:47:53.354447] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:38.884 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:38.884 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:38.884 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:38.884 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:38.884 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:38.884 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:38.884 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:38.884 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:38.884 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:38.884 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:38.884 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:38.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:38.884 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:39.144 20:47:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:39.144 20:47:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:39.144 20:47:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.144 20:47:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:39.144 20:47:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:39.144 20:47:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.144 20:47:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:39.144 20:47:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:34:39.404 20:47:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:39.404 20:47:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:39.404 20:47:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:39.404 20:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:39.404 20:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.404 20:47:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:39.404 20:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:39.404 20:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.404 20:47:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:39.404 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:39.404 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:39.404 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:39.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:39.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:39.405 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:39.405 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:39.405 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:39.405 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:39.405 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:39.405 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:39.405 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:39.405 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:39.405 ' 00:34:44.693 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:44.693 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:44.693 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:44.693 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:44.693 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:44.693 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:44.693 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:44.693 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:44.693 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:44.693 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:44.693 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:44.693 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:44.693 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:44.693 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3304957 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3304957 ']' 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3304957 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3304957 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3304957' 00:34:44.693 killing process with pid 3304957 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 3304957 00:34:44.693 [2024-05-13 20:48:00.294989] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 3304957 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3304957 ']' 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3304957 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3304957 ']' 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3304957 00:34:44.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3304957) - No such process 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 3304957 is not found' 00:34:44.693 Process with pid 3304957 is not found 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:44.693 00:34:44.693 real 0m15.536s 00:34:44.693 user 0m31.956s 00:34:44.693 sys 0m0.701s 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:44.693 20:48:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:34:44.693 ************************************ 00:34:44.693 END TEST spdkcli_nvmf_tcp 00:34:44.693 ************************************ 00:34:44.693 20:48:00 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:44.693 20:48:00 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:44.693 20:48:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:44.693 20:48:00 -- common/autotest_common.sh@10 -- # set +x 00:34:44.693 ************************************ 00:34:44.693 START TEST nvmf_identify_passthru 00:34:44.693 ************************************ 00:34:44.693 20:48:00 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:44.693 * Looking for test storage... 00:34:44.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:44.693 20:48:00 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:44.693 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:44.693 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:44.693 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:44.693 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:44.693 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:44.693 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:44.693 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:44.693 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:44.694 20:48:00 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:44.694 20:48:00 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.694 20:48:00 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.694 20:48:00 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.694 20:48:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.694 20:48:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.694 20:48:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:44.694 20:48:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:44.694 20:48:00 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:44.694 20:48:00 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:44.694 20:48:00 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.694 20:48:00 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.694 20:48:00 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.694 20:48:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.694 20:48:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.694 20:48:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:44.694 20:48:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.694 20:48:00 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.694 20:48:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:44.694 20:48:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:44.694 20:48:00 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:44.694 20:48:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:52.834 20:48:07 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:52.834 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:52.834 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.834 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:52.834 Found net devices under 0000:31:00.0: cvl_0_0 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:52.835 Found net devices under 0000:31:00.1: cvl_0_1 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
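The block above is the NIC discovery step: the two E810 functions (0000:31:00.0 and 0000:31:00.1, device ID 0x159b) are matched against the known Intel/Mellanox ID tables and their network interfaces (cvl_0_0 and cvl_0_1) are resolved through sysfs. A stripped-down sketch of that sysfs lookup, mirroring the pci_net_devs glob shown in the trace, is:

  # Minimal sketch of mapping a NIC's PCI address to its net devices via sysfs,
  # using the same /sys/bus/pci/devices/<bdf>/net/* glob the trace above expands.
  for pci in 0000:31:00.0 0000:31:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] && echo "$pci -> $(basename "$dev")"
      done
  done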
00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:52.835 20:48:07 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:52.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:52.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:34:52.835 00:34:52.835 --- 10.0.0.2 ping statistics --- 00:34:52.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.835 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:52.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:52.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:34:52.835 00:34:52.835 --- 10.0.0.1 ping statistics --- 00:34:52.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.835 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:52.835 20:48:08 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:52.835 20:48:08 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:52.835 20:48:08 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:52.835 20:48:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.835 20:48:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:52.835 20:48:08 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:52.835 20:48:08 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:52.835 20:48:08 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:52.835 20:48:08 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:52.835 20:48:08 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:52.835 20:48:08 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:52.835 20:48:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:52.835 20:48:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:52.835 20:48:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:52.835 20:48:08 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:52.835 20:48:08 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:34:52.835 20:48:08 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:65:00.0 00:34:52.835 20:48:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:34:52.835 20:48:08 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:34:52.835 20:48:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:34:52.835 20:48:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:52.835 20:48:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:52.835 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.096 
20:48:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:34:53.096 20:48:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:34:53.096 20:48:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:53.096 20:48:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:53.096 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.357 20:48:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:34:53.357 20:48:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:53.357 20:48:09 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:53.357 20:48:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.617 20:48:09 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:53.617 20:48:09 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:53.617 20:48:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.617 20:48:09 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3312734 00:34:53.617 20:48:09 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:53.618 20:48:09 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:53.618 20:48:09 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3312734 00:34:53.618 20:48:09 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 3312734 ']' 00:34:53.618 20:48:09 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.618 20:48:09 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:53.618 20:48:09 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.618 20:48:09 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:53.618 20:48:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.618 [2024-05-13 20:48:09.359982] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:34:53.618 [2024-05-13 20:48:09.360038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:53.618 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.618 [2024-05-13 20:48:09.434099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:53.618 [2024-05-13 20:48:09.503094] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:53.618 [2024-05-13 20:48:09.503133] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:53.618 [2024-05-13 20:48:09.503140] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:53.618 [2024-05-13 20:48:09.503146] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:53.618 [2024-05-13 20:48:09.503152] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:53.618 [2024-05-13 20:48:09.503284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.618 [2024-05-13 20:48:09.503435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:53.618 [2024-05-13 20:48:09.503649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:53.618 [2024-05-13 20:48:09.503654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.559 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:54.560 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:34:54.560 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:54.560 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.560 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.560 INFO: Log level set to 20 00:34:54.560 INFO: Requests: 00:34:54.560 { 00:34:54.560 "jsonrpc": "2.0", 00:34:54.560 "method": "nvmf_set_config", 00:34:54.560 "id": 1, 00:34:54.560 "params": { 00:34:54.560 "admin_cmd_passthru": { 00:34:54.560 "identify_ctrlr": true 00:34:54.560 } 00:34:54.560 } 00:34:54.560 } 00:34:54.560 00:34:54.560 INFO: response: 00:34:54.560 { 00:34:54.560 "jsonrpc": "2.0", 00:34:54.560 "id": 1, 00:34:54.560 "result": true 00:34:54.560 } 00:34:54.560 00:34:54.560 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.560 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:54.560 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.560 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.560 INFO: Setting log level to 20 00:34:54.560 INFO: Setting log level to 20 00:34:54.560 INFO: Log level set to 20 00:34:54.560 INFO: Log level set to 20 00:34:54.560 INFO: Requests: 00:34:54.560 { 00:34:54.560 "jsonrpc": "2.0", 00:34:54.560 "method": "framework_start_init", 00:34:54.560 "id": 1 00:34:54.560 } 00:34:54.560 00:34:54.560 INFO: Requests: 00:34:54.560 { 00:34:54.560 "jsonrpc": "2.0", 00:34:54.560 "method": "framework_start_init", 00:34:54.560 "id": 1 00:34:54.560 } 00:34:54.560 00:34:54.560 [2024-05-13 20:48:10.218064] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:54.560 INFO: response: 00:34:54.560 { 00:34:54.560 "jsonrpc": "2.0", 00:34:54.560 "id": 1, 00:34:54.560 "result": true 00:34:54.560 } 00:34:54.560 00:34:54.560 INFO: response: 00:34:54.560 { 00:34:54.560 "jsonrpc": "2.0", 00:34:54.560 "id": 1, 00:34:54.560 "result": true 00:34:54.560 } 00:34:54.560 00:34:54.560 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.560 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:54.560 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.560 20:48:10 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:54.560 INFO: Setting log level to 40 00:34:54.560 INFO: Setting log level to 40 00:34:54.560 INFO: Setting log level to 40 00:34:54.560 [2024-05-13 20:48:10.231308] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:54.560 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.560 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:54.560 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:54.560 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.560 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:34:54.560 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.560 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.820 Nvme0n1 00:34:54.820 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.820 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:54.820 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.821 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.821 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.821 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:54.821 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.821 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.821 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.821 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:54.821 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.821 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.821 [2024-05-13 20:48:10.621401] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:54.821 [2024-05-13 20:48:10.621673] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:54.821 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.821 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:54.821 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.821 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.821 [ 00:34:54.821 { 00:34:54.821 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:54.821 "subtype": "Discovery", 00:34:54.821 "listen_addresses": [], 00:34:54.821 "allow_any_host": true, 00:34:54.821 "hosts": [] 00:34:54.821 }, 00:34:54.821 { 00:34:54.821 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:54.821 "subtype": "NVMe", 00:34:54.821 "listen_addresses": [ 00:34:54.821 { 00:34:54.821 "trtype": "TCP", 
00:34:54.821 "adrfam": "IPv4", 00:34:54.821 "traddr": "10.0.0.2", 00:34:54.821 "trsvcid": "4420" 00:34:54.821 } 00:34:54.821 ], 00:34:54.821 "allow_any_host": true, 00:34:54.821 "hosts": [], 00:34:54.821 "serial_number": "SPDK00000000000001", 00:34:54.821 "model_number": "SPDK bdev Controller", 00:34:54.821 "max_namespaces": 1, 00:34:54.821 "min_cntlid": 1, 00:34:54.821 "max_cntlid": 65519, 00:34:54.821 "namespaces": [ 00:34:54.821 { 00:34:54.821 "nsid": 1, 00:34:54.821 "bdev_name": "Nvme0n1", 00:34:54.821 "name": "Nvme0n1", 00:34:54.821 "nguid": "36344730526054940025384500000021", 00:34:54.821 "uuid": "36344730-5260-5494-0025-384500000021" 00:34:54.821 } 00:34:54.821 ] 00:34:54.821 } 00:34:54.821 ] 00:34:54.821 20:48:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.821 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:54.821 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:54.821 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:54.821 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.081 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:34:55.081 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:55.081 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:55.081 20:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:55.081 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.342 20:48:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:34:55.342 20:48:11 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:34:55.342 20:48:11 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:34:55.342 20:48:11 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:55.342 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.342 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.342 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.342 20:48:11 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:55.342 20:48:11 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:55.342 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:55.342 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:55.342 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:55.342 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:55.342 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:55.342 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:55.342 rmmod nvme_tcp 00:34:55.342 rmmod nvme_fabrics 00:34:55.342 rmmod 
nvme_keyring 00:34:55.342 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:55.342 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:55.342 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:55.342 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3312734 ']' 00:34:55.342 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3312734 00:34:55.342 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 3312734 ']' 00:34:55.342 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 3312734 00:34:55.342 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:34:55.342 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:55.342 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3312734 00:34:55.602 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:55.602 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:55.602 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3312734' 00:34:55.602 killing process with pid 3312734 00:34:55.602 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 3312734 00:34:55.602 [2024-05-13 20:48:11.313450] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:55.602 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 3312734 00:34:55.864 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:55.864 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:55.864 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:55.864 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:55.864 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:55.864 20:48:11 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.864 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:55.864 20:48:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.776 20:48:13 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:57.776 00:34:57.776 real 0m13.142s 00:34:57.776 user 0m10.698s 00:34:57.776 sys 0m6.291s 00:34:57.776 20:48:13 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:57.776 20:48:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.776 ************************************ 00:34:57.776 END TEST nvmf_identify_passthru 00:34:57.776 ************************************ 00:34:57.776 20:48:13 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:57.776 20:48:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:57.776 20:48:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:57.776 20:48:13 -- common/autotest_common.sh@10 -- # set +x 00:34:58.036 ************************************ 00:34:58.036 START TEST nvmf_dif 
00:34:58.036 ************************************ 00:34:58.036 20:48:13 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:58.036 * Looking for test storage... 00:34:58.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:58.036 20:48:13 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:58.036 20:48:13 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:58.036 20:48:13 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:58.036 20:48:13 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:58.036 20:48:13 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.036 20:48:13 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.036 20:48:13 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.036 20:48:13 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:58.036 20:48:13 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:58.036 20:48:13 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:58.036 20:48:13 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:58.036 20:48:13 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:58.036 20:48:13 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:58.036 20:48:13 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:58.036 20:48:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:58.036 20:48:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:58.036 20:48:13 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:58.036 20:48:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:06.184 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:06.184 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:06.184 20:48:21 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:06.184 Found net devices under 0000:31:00.0: cvl_0_0 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:06.184 Found net devices under 0000:31:00.1: cvl_0_1 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:06.184 20:48:21 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:06.184 20:48:22 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:06.184 20:48:22 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:06.184 20:48:22 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:06.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:06.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:35:06.184 00:35:06.184 --- 10.0.0.2 ping statistics --- 00:35:06.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.184 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:35:06.184 20:48:22 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:06.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:06.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:35:06.184 00:35:06.184 --- 10.0.0.1 ping statistics --- 00:35:06.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.184 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:35:06.184 20:48:22 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:06.184 20:48:22 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:06.184 20:48:22 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:06.184 20:48:22 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:10.388 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:10.388 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:10.388 20:48:26 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:10.388 20:48:26 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:10.388 20:48:26 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:10.388 20:48:26 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:10.388 20:48:26 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:10.388 20:48:26 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:10.388 20:48:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:10.388 20:48:26 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:35:10.388 20:48:26 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:10.388 20:48:26 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:10.388 20:48:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:10.388 20:48:26 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3319648 00:35:10.388 20:48:26 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3319648 00:35:10.388 20:48:26 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 3319648 ']' 00:35:10.388 20:48:26 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.388 20:48:26 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:10.388 20:48:26 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:10.388 20:48:26 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:10.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:10.388 20:48:26 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:10.388 20:48:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:10.388 [2024-05-13 20:48:26.202140] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:35:10.388 [2024-05-13 20:48:26.202210] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:10.388 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.388 [2024-05-13 20:48:26.280347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.649 [2024-05-13 20:48:26.355439] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:10.649 [2024-05-13 20:48:26.355477] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:10.649 [2024-05-13 20:48:26.355484] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:10.649 [2024-05-13 20:48:26.355491] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:10.649 [2024-05-13 20:48:26.355496] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:10.649 [2024-05-13 20:48:26.355515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.220 20:48:26 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:11.220 20:48:26 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:11.220 20:48:26 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:11.220 20:48:26 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:11.220 20:48:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:11.220 20:48:27 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:11.220 20:48:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:11.220 20:48:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:11.220 20:48:27 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.220 20:48:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:11.220 [2024-05-13 20:48:27.018262] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:11.220 20:48:27 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.220 20:48:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:11.220 20:48:27 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:11.220 20:48:27 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:11.220 20:48:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:11.220 ************************************ 00:35:11.220 START TEST fio_dif_1_default 00:35:11.220 ************************************ 00:35:11.220 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:11.220 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:11.220 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:11.220 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:11.220 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:11.220 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:11.220 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:11.220 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:11.221 bdev_null0 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:11.221 [2024-05-13 20:48:27.110442] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:11.221 [2024-05-13 20:48:27.110629] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:11.221 { 00:35:11.221 "params": { 00:35:11.221 "name": "Nvme$subsystem", 00:35:11.221 "trtype": "$TEST_TRANSPORT", 00:35:11.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:11.221 "adrfam": "ipv4", 00:35:11.221 "trsvcid": "$NVMF_PORT", 00:35:11.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:11.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:11.221 "hdgst": ${hdgst:-false}, 00:35:11.221 "ddgst": ${ddgst:-false} 00:35:11.221 }, 00:35:11.221 "method": "bdev_nvme_attach_controller" 00:35:11.221 } 00:35:11.221 EOF 00:35:11.221 )") 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in 
"${sanitizers[@]}" 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:11.221 "params": { 00:35:11.221 "name": "Nvme0", 00:35:11.221 "trtype": "tcp", 00:35:11.221 "traddr": "10.0.0.2", 00:35:11.221 "adrfam": "ipv4", 00:35:11.221 "trsvcid": "4420", 00:35:11.221 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:11.221 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:11.221 "hdgst": false, 00:35:11.221 "ddgst": false 00:35:11.221 }, 00:35:11.221 "method": "bdev_nvme_attach_controller" 00:35:11.221 }' 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:11.221 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:11.506 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:11.506 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:11.506 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:11.506 20:48:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.776 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:11.776 fio-3.35 00:35:11.776 Starting 1 thread 00:35:11.776 EAL: No free 2048 kB hugepages reported on node 1 00:35:24.016 00:35:24.016 filename0: (groupid=0, jobs=1): err= 0: pid=3320175: Mon May 13 20:48:38 2024 00:35:24.016 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10034msec) 00:35:24.016 slat (nsec): min=5548, max=31567, avg=6314.46, stdev=1571.80 00:35:24.016 clat (usec): min=40973, max=43061, avg=41965.49, stdev=178.46 00:35:24.016 lat (usec): min=40980, max=43066, avg=41971.81, stdev=178.46 00:35:24.016 clat percentiles (usec): 00:35:24.016 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:35:24.016 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:24.016 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:24.016 | 99.00th=[42206], 99.50th=[42730], 
99.90th=[43254], 99.95th=[43254], 00:35:24.016 | 99.99th=[43254] 00:35:24.016 bw ( KiB/s): min= 352, max= 384, per=99.71%, avg=380.80, stdev= 9.85, samples=20 00:35:24.016 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:35:24.016 lat (msec) : 50=100.00% 00:35:24.016 cpu : usr=95.54%, sys=4.27%, ctx=14, majf=0, minf=213 00:35:24.016 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.016 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.016 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:24.016 00:35:24.016 Run status group 0 (all jobs): 00:35:24.016 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10034-10034msec 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.016 00:35:24.016 real 0m11.143s 00:35:24.016 user 0m24.530s 00:35:24.016 sys 0m0.716s 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:24.016 ************************************ 00:35:24.016 END TEST fio_dif_1_default 00:35:24.016 ************************************ 00:35:24.016 20:48:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:24.016 20:48:38 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:24.016 20:48:38 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:24.016 20:48:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:24.016 ************************************ 00:35:24.016 START TEST fio_dif_1_multi_subsystems 00:35:24.016 ************************************ 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- 
# local sub 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:24.016 bdev_null0 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:24.016 [2024-05-13 20:48:38.340968] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:24.016 bdev_null1 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.016 20:48:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:24.016 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:24.017 { 00:35:24.017 "params": { 00:35:24.017 "name": "Nvme$subsystem", 00:35:24.017 "trtype": "$TEST_TRANSPORT", 00:35:24.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:24.017 "adrfam": "ipv4", 00:35:24.017 "trsvcid": "$NVMF_PORT", 00:35:24.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:24.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:24.017 "hdgst": ${hdgst:-false}, 00:35:24.017 "ddgst": ${ddgst:-false} 00:35:24.017 }, 00:35:24.017 "method": "bdev_nvme_attach_controller" 00:35:24.017 } 00:35:24.017 EOF 00:35:24.017 )") 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:24.017 { 00:35:24.017 "params": { 00:35:24.017 "name": "Nvme$subsystem", 00:35:24.017 "trtype": "$TEST_TRANSPORT", 00:35:24.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:24.017 "adrfam": "ipv4", 00:35:24.017 "trsvcid": "$NVMF_PORT", 00:35:24.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:24.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:24.017 "hdgst": ${hdgst:-false}, 00:35:24.017 "ddgst": ${ddgst:-false} 00:35:24.017 }, 00:35:24.017 "method": "bdev_nvme_attach_controller" 00:35:24.017 } 00:35:24.017 EOF 00:35:24.017 )") 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
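
The xtrace records above show fio_dif_1_multi_subsystems building its two targets through rpc_cmd: for each subsystem a 64 MiB null bdev with 512-byte blocks, 16-byte metadata and DIF type 1, exposed over NVMe/TCP on 10.0.0.2:4420. The same setup can be reproduced by hand with scripts/rpc.py from the SPDK tree; the sketch below is not the test's own helper, and it assumes a running nvmf_tgt with the tcp transport already created (the framework does that earlier in the run) and rpc.py on PATH.

for sub in 0 1; do
    # 64 MiB null bdev, 512 B blocks, 16 B metadata, DIF type 1 -- as recorded in the trace
    rpc.py bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
        --serial-number "53313233-${sub}" --allow-any-host
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" \
        -t tcp -a 10.0.0.2 -s 4420
done

Each loop iteration mirrors the create_subsystem 0 / create_subsystem 1 calls traced above.
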
00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:24.017 "params": { 00:35:24.017 "name": "Nvme0", 00:35:24.017 "trtype": "tcp", 00:35:24.017 "traddr": "10.0.0.2", 00:35:24.017 "adrfam": "ipv4", 00:35:24.017 "trsvcid": "4420", 00:35:24.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:24.017 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:24.017 "hdgst": false, 00:35:24.017 "ddgst": false 00:35:24.017 }, 00:35:24.017 "method": "bdev_nvme_attach_controller" 00:35:24.017 },{ 00:35:24.017 "params": { 00:35:24.017 "name": "Nvme1", 00:35:24.017 "trtype": "tcp", 00:35:24.017 "traddr": "10.0.0.2", 00:35:24.017 "adrfam": "ipv4", 00:35:24.017 "trsvcid": "4420", 00:35:24.017 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:24.017 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:24.017 "hdgst": false, 00:35:24.017 "ddgst": false 00:35:24.017 }, 00:35:24.017 "method": "bdev_nvme_attach_controller" 00:35:24.017 }' 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:24.017 20:48:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.017 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:24.017 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:24.017 fio-3.35 00:35:24.017 Starting 2 threads 00:35:24.017 EAL: No free 2048 kB hugepages reported on node 1 00:35:34.038 00:35:34.038 filename0: (groupid=0, jobs=1): err= 0: pid=3322481: Mon May 13 20:48:49 2024 00:35:34.038 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10020msec) 00:35:34.038 slat (nsec): min=5569, max=68659, avg=6883.94, stdev=2715.41 00:35:34.038 clat (usec): min=40914, max=44194, avg=41901.93, stdev=322.29 00:35:34.038 lat (usec): min=40922, max=44234, avg=41908.82, stdev=322.68 00:35:34.038 clat percentiles (usec): 00:35:34.038 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:35:34.038 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:34.038 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:34.038 | 99.00th=[42206], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:35:34.039 | 99.99th=[44303] 
00:35:34.039 bw ( KiB/s): min= 352, max= 384, per=49.27%, avg=380.80, stdev= 9.85, samples=20 00:35:34.039 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:35:34.039 lat (msec) : 50=100.00% 00:35:34.039 cpu : usr=96.91%, sys=2.86%, ctx=16, majf=0, minf=188 00:35:34.039 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.039 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.039 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:34.039 filename1: (groupid=0, jobs=1): err= 0: pid=3322482: Mon May 13 20:48:49 2024 00:35:34.039 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10015msec) 00:35:34.039 slat (nsec): min=5575, max=35964, avg=6822.10, stdev=1980.00 00:35:34.039 clat (usec): min=40827, max=42186, avg=41022.71, stdev=201.14 00:35:34.039 lat (usec): min=40833, max=42222, avg=41029.53, stdev=201.61 00:35:34.039 clat percentiles (usec): 00:35:34.039 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:34.039 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:34.039 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:34.039 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:34.039 | 99.99th=[42206] 00:35:34.039 bw ( KiB/s): min= 384, max= 416, per=50.31%, avg=388.80, stdev=11.72, samples=20 00:35:34.039 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:34.039 lat (msec) : 50=100.00% 00:35:34.039 cpu : usr=97.16%, sys=2.61%, ctx=14, majf=0, minf=125 00:35:34.039 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.039 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.039 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:34.039 00:35:34.039 Run status group 0 (all jobs): 00:35:34.039 READ: bw=771KiB/s (790kB/s), 382KiB/s-390KiB/s (391kB/s-399kB/s), io=7728KiB (7913kB), run=10015-10020msec 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.039 20:48:49 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.039 00:35:34.039 real 0m11.482s 00:35:34.039 user 0m35.500s 00:35:34.039 sys 0m0.909s 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:34.039 20:48:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:34.039 ************************************ 00:35:34.039 END TEST fio_dif_1_multi_subsystems 00:35:34.039 ************************************ 00:35:34.039 20:48:49 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:34.039 20:48:49 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:34.039 20:48:49 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:34.039 20:48:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:34.039 ************************************ 00:35:34.039 START TEST fio_dif_rand_params 00:35:34.039 ************************************ 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.039 bdev_null0 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.039 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.040 [2024-05-13 20:48:49.911684] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:34.040 { 00:35:34.040 "params": { 00:35:34.040 "name": "Nvme$subsystem", 00:35:34.040 "trtype": "$TEST_TRANSPORT", 00:35:34.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.040 "adrfam": "ipv4", 00:35:34.040 "trsvcid": "$NVMF_PORT", 00:35:34.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.040 "hdgst": ${hdgst:-false}, 00:35:34.040 "ddgst": ${ddgst:-false} 00:35:34.040 }, 00:35:34.040 "method": 
"bdev_nvme_attach_controller" 00:35:34.040 } 00:35:34.040 EOF 00:35:34.040 )") 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:34.040 "params": { 00:35:34.040 "name": "Nvme0", 00:35:34.040 "trtype": "tcp", 00:35:34.040 "traddr": "10.0.0.2", 00:35:34.040 "adrfam": "ipv4", 00:35:34.040 "trsvcid": "4420", 00:35:34.040 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:34.040 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:34.040 "hdgst": false, 00:35:34.040 "ddgst": false 00:35:34.040 }, 00:35:34.040 "method": "bdev_nvme_attach_controller" 00:35:34.040 }' 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:34.040 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:34.374 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:34.374 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:34.374 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:34.374 20:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.659 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:34.659 ... 
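
The records just above show how the job is wired up: gen_nvmf_target_json prints a single bdev_nvme_attach_controller block, the SPDK fio plugin is preloaded via LD_PRELOAD, and fio reads both the JSON and the generated job file from /dev/fd descriptors. A rough standalone equivalent is sketched below. The "subsystems"/"config" envelope around the attach block, the job-file layout and the filename=Nvme0n1 bdev name are assumptions (the log prints neither the envelope nor gen_fio_conf's output); the attach parameters come from the trace, and the randread / 128 KiB / iodepth 3 / 3 threads / 5 s shape matches the job banner above and the "Starting 3 threads" line that follows.

# JSON handed to --spdk_json_conf; envelope assumed, attach parameters copied from the trace.
cat > nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Job file: layout and filename are assumptions; the I/O parameters match the banner.
cat > dif_rand.fio <<'EOF'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
runtime=5
time_based=1
numjobs=3

[job0]
filename=Nvme0n1
EOF

# Plugin preloaded exactly as in the trace (build/fio/spdk_bdev from the SPDK workspace).
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=nvme0.json dif_rand.fio
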
00:35:34.659 fio-3.35 00:35:34.659 Starting 3 threads 00:35:34.659 EAL: No free 2048 kB hugepages reported on node 1 00:35:39.950 00:35:39.950 filename0: (groupid=0, jobs=1): err= 0: pid=3324841: Mon May 13 20:48:55 2024 00:35:39.950 read: IOPS=191, BW=24.0MiB/s (25.1MB/s)(120MiB/5007msec) 00:35:39.950 slat (nsec): min=8137, max=31629, avg=8834.31, stdev=1360.60 00:35:39.950 clat (usec): min=6140, max=92656, avg=15631.62, stdev=14325.38 00:35:39.950 lat (usec): min=6148, max=92665, avg=15640.45, stdev=14325.34 00:35:39.950 clat percentiles (usec): 00:35:39.950 | 1.00th=[ 6456], 5.00th=[ 7111], 10.00th=[ 7963], 20.00th=[ 8717], 00:35:39.950 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[10814], 60.00th=[11338], 00:35:39.950 | 70.00th=[12125], 80.00th=[13435], 90.00th=[49546], 95.00th=[51119], 00:35:39.950 | 99.00th=[54789], 99.50th=[89654], 99.90th=[92799], 99.95th=[92799], 00:35:39.950 | 99.99th=[92799] 00:35:39.950 bw ( KiB/s): min=14336, max=35584, per=33.93%, avg=24524.80, stdev=6771.83, samples=10 00:35:39.950 iops : min= 112, max= 278, avg=191.60, stdev=52.90, samples=10 00:35:39.950 lat (msec) : 10=37.92%, 20=49.79%, 50=3.65%, 100=8.65% 00:35:39.950 cpu : usr=96.20%, sys=3.56%, ctx=10, majf=0, minf=75 00:35:39.950 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.950 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.950 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:39.950 filename0: (groupid=0, jobs=1): err= 0: pid=3324842: Mon May 13 20:48:55 2024 00:35:39.950 read: IOPS=173, BW=21.6MiB/s (22.7MB/s)(109MiB/5045msec) 00:35:39.950 slat (nsec): min=5606, max=31464, avg=8509.07, stdev=1618.85 00:35:39.950 clat (usec): min=5702, max=92790, avg=17275.85, stdev=16597.47 00:35:39.950 lat (usec): min=5711, max=92799, avg=17284.36, stdev=16597.47 00:35:39.950 clat percentiles (usec): 00:35:39.950 | 1.00th=[ 6063], 5.00th=[ 6587], 10.00th=[ 7373], 20.00th=[ 8586], 00:35:39.950 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10945], 60.00th=[11863], 00:35:39.950 | 70.00th=[12911], 80.00th=[14877], 90.00th=[50594], 95.00th=[52167], 00:35:39.950 | 99.00th=[90702], 99.50th=[90702], 99.90th=[92799], 99.95th=[92799], 00:35:39.950 | 99.99th=[92799] 00:35:39.950 bw ( KiB/s): min=13824, max=33280, per=30.85%, avg=22297.60, stdev=6622.49, samples=10 00:35:39.950 iops : min= 108, max= 260, avg=174.20, stdev=51.74, samples=10 00:35:39.950 lat (msec) : 10=37.80%, 20=46.74%, 50=4.47%, 100=11.00% 00:35:39.950 cpu : usr=96.49%, sys=3.27%, ctx=9, majf=0, minf=84 00:35:39.950 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.950 issued rwts: total=873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.950 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:39.950 filename0: (groupid=0, jobs=1): err= 0: pid=3324843: Mon May 13 20:48:55 2024 00:35:39.950 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(127MiB/5029msec) 00:35:39.950 slat (nsec): min=5666, max=31689, avg=8282.27, stdev=1589.96 00:35:39.950 clat (usec): min=5755, max=94093, avg=14835.88, stdev=14665.88 00:35:39.950 lat (usec): min=5764, max=94102, avg=14844.16, stdev=14665.92 00:35:39.950 clat percentiles (usec): 
00:35:39.950 | 1.00th=[ 6194], 5.00th=[ 6718], 10.00th=[ 7242], 20.00th=[ 8455], 00:35:39.950 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11207], 00:35:39.950 | 70.00th=[11994], 80.00th=[13304], 90.00th=[17171], 95.00th=[51643], 00:35:39.950 | 99.00th=[90702], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:35:39.950 | 99.99th=[93848] 00:35:39.950 bw ( KiB/s): min=10752, max=37376, per=35.88%, avg=25932.80, stdev=7169.07, samples=10 00:35:39.950 iops : min= 84, max= 292, avg=202.60, stdev=56.01, samples=10 00:35:39.950 lat (msec) : 10=44.09%, 20=45.96%, 50=2.66%, 100=7.28% 00:35:39.950 cpu : usr=95.90%, sys=3.84%, ctx=10, majf=0, minf=110 00:35:39.950 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.950 issued rwts: total=1016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.950 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:39.950 00:35:39.950 Run status group 0 (all jobs): 00:35:39.950 READ: bw=70.6MiB/s (74.0MB/s), 21.6MiB/s-25.3MiB/s (22.7MB/s-26.5MB/s), io=356MiB (373MB), run=5007-5045msec 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
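
Between the two fio_dif_rand_params passes, the trace above tears the first target down (nvmf_delete_subsystem plus bdev_null_delete) and then rebuilds three subsystems whose null bdevs use DIF type 2, ahead of a 4 KiB, 8-job, queue-depth-16 run with two extra files. A compressed rpc.py sketch of that transition, under the same assumptions as the setup sketch earlier (running nvmf_tgt, existing tcp transport, rpc.py on PATH):

# Tear down pass 1 exactly as destroy_subsystems does in the trace above ...
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_null_delete bdev_null0

# ... then recreate for subsystems 0, 1 and 2. Only the DIF type changes here;
# the nvmf_create_subsystem, add_ns and add_listener calls are the same as in
# the earlier sketch and in the trace that continues below.
for sub in 0 1 2; do
    rpc.py bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 2
done
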
00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.212 bdev_null0 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.212 20:48:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.212 [2024-05-13 20:48:56.022386] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.212 bdev_null1 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.212 bdev_null2 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.212 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:40.213 { 00:35:40.213 "params": { 00:35:40.213 "name": "Nvme$subsystem", 00:35:40.213 "trtype": "$TEST_TRANSPORT", 00:35:40.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.213 "adrfam": "ipv4", 00:35:40.213 "trsvcid": "$NVMF_PORT", 00:35:40.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.213 "hdgst": ${hdgst:-false}, 00:35:40.213 "ddgst": ${ddgst:-false} 00:35:40.213 }, 00:35:40.213 "method": "bdev_nvme_attach_controller" 00:35:40.213 } 00:35:40.213 EOF 00:35:40.213 )") 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:40.213 { 00:35:40.213 "params": { 00:35:40.213 "name": "Nvme$subsystem", 00:35:40.213 "trtype": "$TEST_TRANSPORT", 00:35:40.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.213 "adrfam": "ipv4", 00:35:40.213 "trsvcid": "$NVMF_PORT", 00:35:40.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.213 "hdgst": ${hdgst:-false}, 00:35:40.213 "ddgst": ${ddgst:-false} 00:35:40.213 }, 00:35:40.213 "method": "bdev_nvme_attach_controller" 00:35:40.213 } 00:35:40.213 EOF 00:35:40.213 )") 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:40.213 { 00:35:40.213 "params": { 00:35:40.213 "name": "Nvme$subsystem", 00:35:40.213 "trtype": "$TEST_TRANSPORT", 00:35:40.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.213 "adrfam": "ipv4", 00:35:40.213 "trsvcid": "$NVMF_PORT", 00:35:40.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.213 "hdgst": ${hdgst:-false}, 00:35:40.213 "ddgst": ${ddgst:-false} 00:35:40.213 }, 00:35:40.213 "method": "bdev_nvme_attach_controller" 00:35:40.213 } 00:35:40.213 EOF 00:35:40.213 )") 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:40.213 20:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:40.213 "params": { 00:35:40.213 "name": "Nvme0", 00:35:40.213 "trtype": "tcp", 00:35:40.213 "traddr": "10.0.0.2", 00:35:40.213 "adrfam": "ipv4", 00:35:40.213 "trsvcid": "4420", 00:35:40.213 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.213 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:40.213 "hdgst": false, 00:35:40.213 "ddgst": false 00:35:40.213 }, 00:35:40.213 "method": "bdev_nvme_attach_controller" 00:35:40.213 },{ 00:35:40.213 "params": { 00:35:40.213 "name": "Nvme1", 00:35:40.213 "trtype": "tcp", 00:35:40.213 "traddr": "10.0.0.2", 00:35:40.213 "adrfam": "ipv4", 00:35:40.213 "trsvcid": "4420", 00:35:40.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:40.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:40.213 "hdgst": false, 00:35:40.213 "ddgst": false 00:35:40.213 }, 00:35:40.213 "method": "bdev_nvme_attach_controller" 00:35:40.213 },{ 00:35:40.213 "params": { 00:35:40.213 "name": "Nvme2", 00:35:40.213 "trtype": "tcp", 00:35:40.213 "traddr": "10.0.0.2", 00:35:40.213 "adrfam": "ipv4", 00:35:40.213 "trsvcid": "4420", 00:35:40.213 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:40.213 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:40.213 "hdgst": false, 00:35:40.213 "ddgst": false 00:35:40.213 }, 00:35:40.213 "method": "bdev_nvme_attach_controller" 00:35:40.213 }' 00:35:40.495 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:40.495 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:40.495 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:40.495 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.495 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:40.495 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:40.495 20:48:56 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1341 -- # asan_lib= 00:35:40.495 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:40.495 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:40.495 20:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.760 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:40.760 ... 00:35:40.760 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:40.760 ... 00:35:40.760 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:40.760 ... 00:35:40.760 fio-3.35 00:35:40.760 Starting 24 threads 00:35:40.760 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.993 00:35:52.993 filename0: (groupid=0, jobs=1): err= 0: pid=3326186: Mon May 13 20:49:07 2024 00:35:52.993 read: IOPS=514, BW=2058KiB/s (2108kB/s)(20.1MiB/10012msec) 00:35:52.993 slat (usec): min=5, max=109, avg=15.01, stdev=12.61 00:35:52.993 clat (usec): min=3908, max=33327, avg=30975.01, stdev=3148.75 00:35:52.993 lat (usec): min=3922, max=33336, avg=30990.02, stdev=3148.38 00:35:52.993 clat percentiles (usec): 00:35:52.993 | 1.00th=[ 8160], 5.00th=[30278], 10.00th=[30802], 20.00th=[31065], 00:35:52.993 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.993 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:35:52.993 | 99.00th=[32900], 99.50th=[33162], 99.90th=[33424], 99.95th=[33424], 00:35:52.993 | 99.99th=[33424] 00:35:52.993 bw ( KiB/s): min= 1920, max= 2432, per=4.22%, avg=2054.60, stdev=113.53, samples=20 00:35:52.993 iops : min= 480, max= 608, avg=513.65, stdev=28.38, samples=20 00:35:52.993 lat (msec) : 4=0.08%, 10=1.13%, 20=0.52%, 50=98.27% 00:35:52.993 cpu : usr=98.49%, sys=0.84%, ctx=82, majf=0, minf=22 00:35:52.993 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:52.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.993 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.993 issued rwts: total=5152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.993 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.994 filename0: (groupid=0, jobs=1): err= 0: pid=3326187: Mon May 13 20:49:07 2024 00:35:52.994 read: IOPS=506, BW=2027KiB/s (2076kB/s)(19.8MiB/10004msec) 00:35:52.994 slat (nsec): min=5597, max=88855, avg=16289.00, stdev=12565.22 00:35:52.994 clat (usec): min=6111, max=59157, avg=31470.03, stdev=4053.43 00:35:52.994 lat (usec): min=6117, max=59174, avg=31486.32, stdev=4052.79 00:35:52.994 clat percentiles (usec): 00:35:52.994 | 1.00th=[17695], 5.00th=[25297], 10.00th=[29754], 20.00th=[31065], 00:35:52.994 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:35:52.994 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[37487], 00:35:52.994 | 99.00th=[45351], 99.50th=[47449], 99.90th=[58983], 99.95th=[58983], 00:35:52.994 | 99.99th=[58983] 00:35:52.994 bw ( KiB/s): min= 1792, max= 2112, per=4.14%, avg=2016.84, stdev=69.23, samples=19 00:35:52.994 iops : min= 448, max= 528, avg=504.21, stdev=17.31, samples=19 00:35:52.994 lat (msec) : 10=0.36%, 20=1.34%, 
50=97.99%, 100=0.32% 00:35:52.994 cpu : usr=99.14%, sys=0.55%, ctx=39, majf=0, minf=15 00:35:52.994 IO depths : 1=2.2%, 2=4.6%, 4=11.6%, 8=68.6%, 16=13.0%, 32=0.0%, >=64=0.0% 00:35:52.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.994 complete : 0=0.0%, 4=91.3%, 8=5.7%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.994 issued rwts: total=5070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.994 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.994 filename0: (groupid=0, jobs=1): err= 0: pid=3326188: Mon May 13 20:49:07 2024 00:35:52.994 read: IOPS=504, BW=2019KiB/s (2067kB/s)(19.7MiB/10007msec) 00:35:52.994 slat (nsec): min=5710, max=77745, avg=20901.27, stdev=12946.31 00:35:52.994 clat (usec): min=10569, max=60086, avg=31511.07, stdev=2217.90 00:35:52.994 lat (usec): min=10575, max=60099, avg=31531.97, stdev=2217.98 00:35:52.994 clat percentiles (usec): 00:35:52.994 | 1.00th=[29754], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:35:52.994 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.994 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32375], 95.00th=[32637], 00:35:52.994 | 99.00th=[40633], 99.50th=[42730], 99.90th=[51119], 99.95th=[51119], 00:35:52.994 | 99.99th=[60031] 00:35:52.994 bw ( KiB/s): min= 1872, max= 2048, per=4.13%, avg=2011.79, stdev=63.07, samples=19 00:35:52.994 iops : min= 468, max= 512, avg=502.95, stdev=15.77, samples=19 00:35:52.994 lat (msec) : 20=0.51%, 50=99.15%, 100=0.34% 00:35:52.994 cpu : usr=98.12%, sys=0.99%, ctx=37, majf=0, minf=16 00:35:52.994 IO depths : 1=5.8%, 2=11.7%, 4=23.8%, 8=51.7%, 16=7.1%, 32=0.0%, >=64=0.0% 00:35:52.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.994 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.994 issued rwts: total=5050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.994 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.994 filename0: (groupid=0, jobs=1): err= 0: pid=3326189: Mon May 13 20:49:07 2024 00:35:52.994 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10018msec) 00:35:52.994 slat (nsec): min=5731, max=80903, avg=18405.49, stdev=12298.02 00:35:52.994 clat (usec): min=12115, max=56275, avg=31759.63, stdev=4044.34 00:35:52.994 lat (usec): min=12132, max=56282, avg=31778.03, stdev=4044.29 00:35:52.994 clat percentiles (usec): 00:35:52.994 | 1.00th=[18744], 5.00th=[24773], 10.00th=[30540], 20.00th=[31065], 00:35:52.994 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.994 | 70.00th=[31589], 80.00th=[32113], 90.00th=[33817], 95.00th=[39584], 00:35:52.994 | 99.00th=[45351], 99.50th=[46400], 99.90th=[54789], 99.95th=[56361], 00:35:52.994 | 99.99th=[56361] 00:35:52.994 bw ( KiB/s): min= 1872, max= 2192, per=4.12%, avg=2006.15, stdev=87.59, samples=20 00:35:52.994 iops : min= 468, max= 548, avg=501.50, stdev=21.86, samples=20 00:35:52.994 lat (msec) : 20=1.55%, 50=98.29%, 100=0.16% 00:35:52.994 cpu : usr=99.17%, sys=0.55%, ctx=10, majf=0, minf=19 00:35:52.994 IO depths : 1=3.8%, 2=8.2%, 4=19.0%, 8=59.4%, 16=9.5%, 32=0.0%, >=64=0.0% 00:35:52.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.994 complete : 0=0.0%, 4=92.8%, 8=2.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.994 issued rwts: total=5022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.994 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.994 filename0: (groupid=0, jobs=1): err= 0: pid=3326190: Mon May 13 20:49:07 
2024 00:35:52.994 read: IOPS=507, BW=2029KiB/s (2077kB/s)(19.8MiB/10001msec) 00:35:52.994 slat (nsec): min=5778, max=78130, avg=18172.01, stdev=11384.10 00:35:52.994 clat (usec): min=17873, max=33291, avg=31389.25, stdev=939.53 00:35:52.994 lat (usec): min=17882, max=33299, avg=31407.42, stdev=938.71 00:35:52.994 clat percentiles (usec): 00:35:52.994 | 1.00th=[29754], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:35:52.994 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.994 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32637], 00:35:52.994 | 99.00th=[32900], 99.50th=[32900], 99.90th=[33162], 99.95th=[33162], 00:35:52.994 | 99.99th=[33162] 00:35:52.994 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=2027.53, stdev=47.85, samples=19 00:35:52.994 iops : min= 480, max= 512, avg=506.84, stdev=11.95, samples=19 00:35:52.994 lat (msec) : 20=0.32%, 50=99.68% 00:35:52.994 cpu : usr=99.26%, sys=0.46%, ctx=9, majf=0, minf=20 00:35:52.994 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:52.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.994 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.994 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.994 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.994 filename0: (groupid=0, jobs=1): err= 0: pid=3326191: Mon May 13 20:49:07 2024 00:35:52.994 read: IOPS=504, BW=2020KiB/s (2068kB/s)(19.7MiB/10002msec) 00:35:52.994 slat (nsec): min=5739, max=77844, avg=16856.37, stdev=11600.98 00:35:52.994 clat (usec): min=17985, max=77916, avg=31570.61, stdev=3425.77 00:35:52.994 lat (usec): min=17997, max=77948, avg=31587.46, stdev=3425.69 00:35:52.994 clat percentiles (usec): 00:35:52.994 | 1.00th=[21627], 5.00th=[26346], 10.00th=[30540], 20.00th=[31065], 00:35:52.994 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.994 | 70.00th=[31589], 80.00th=[32113], 90.00th=[32375], 95.00th=[35390], 00:35:52.994 | 99.00th=[44827], 99.50th=[47973], 99.90th=[62129], 99.95th=[78119], 00:35:52.994 | 99.99th=[78119] 00:35:52.994 bw ( KiB/s): min= 1792, max= 2160, per=4.15%, avg=2018.05, stdev=85.38, samples=19 00:35:52.994 iops : min= 448, max= 540, avg=504.47, stdev=21.33, samples=19 00:35:52.994 lat (msec) : 20=0.50%, 50=99.17%, 100=0.34% 00:35:52.994 cpu : usr=97.93%, sys=1.21%, ctx=50, majf=0, minf=22 00:35:52.994 IO depths : 1=3.1%, 2=6.5%, 4=15.6%, 8=63.5%, 16=11.3%, 32=0.0%, >=64=0.0% 00:35:52.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.994 complete : 0=0.0%, 4=92.2%, 8=3.9%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.994 issued rwts: total=5050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.994 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.994 filename0: (groupid=0, jobs=1): err= 0: pid=3326192: Mon May 13 20:49:07 2024 00:35:52.994 read: IOPS=507, BW=2029KiB/s (2078kB/s)(19.8MiB/10003msec) 00:35:52.994 slat (nsec): min=5669, max=83647, avg=19610.30, stdev=14740.21 00:35:52.994 clat (usec): min=6470, max=57912, avg=31389.93, stdev=3432.77 00:35:52.994 lat (usec): min=6476, max=57929, avg=31409.55, stdev=3432.29 00:35:52.994 clat percentiles (usec): 00:35:52.994 | 1.00th=[21890], 5.00th=[26084], 10.00th=[29230], 20.00th=[30802], 00:35:52.994 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.994 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32900], 95.00th=[36963], 
00:35:52.994 | 99.00th=[42730], 99.50th=[48497], 99.90th=[57934], 99.95th=[57934], 00:35:52.994 | 99.99th=[57934] 00:35:52.994 bw ( KiB/s): min= 1795, max= 2112, per=4.16%, avg=2023.74, stdev=65.25, samples=19 00:35:52.994 iops : min= 448, max= 528, avg=505.89, stdev=16.46, samples=19 00:35:52.994 lat (msec) : 10=0.06%, 20=0.69%, 50=98.94%, 100=0.32% 00:35:52.994 cpu : usr=97.90%, sys=1.16%, ctx=40, majf=0, minf=21 00:35:52.994 IO depths : 1=2.6%, 2=6.1%, 4=15.5%, 8=64.2%, 16=11.6%, 32=0.0%, >=64=0.0% 00:35:52.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.994 complete : 0=0.0%, 4=92.0%, 8=4.0%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.994 issued rwts: total=5074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.994 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.994 filename0: (groupid=0, jobs=1): err= 0: pid=3326193: Mon May 13 20:49:07 2024 00:35:52.994 read: IOPS=507, BW=2029KiB/s (2077kB/s)(19.8MiB/10001msec) 00:35:52.994 slat (nsec): min=5777, max=71608, avg=19546.91, stdev=11496.67 00:35:52.994 clat (usec): min=11772, max=48963, avg=31368.80, stdev=1220.29 00:35:52.994 lat (usec): min=11781, max=48988, avg=31388.35, stdev=1220.20 00:35:52.994 clat percentiles (usec): 00:35:52.994 | 1.00th=[29754], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:35:52.994 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31327], 00:35:52.994 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32375], 95.00th=[32637], 00:35:52.994 | 99.00th=[32900], 99.50th=[33162], 99.90th=[42206], 99.95th=[45351], 00:35:52.994 | 99.99th=[49021] 00:35:52.995 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=2027.53, stdev=47.85, samples=19 00:35:52.995 iops : min= 480, max= 512, avg=506.84, stdev=11.95, samples=19 00:35:52.995 lat (msec) : 20=0.32%, 50=99.68% 00:35:52.995 cpu : usr=98.11%, sys=1.06%, ctx=58, majf=0, minf=26 00:35:52.995 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:52.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.995 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.995 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.995 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.995 filename1: (groupid=0, jobs=1): err= 0: pid=3326194: Mon May 13 20:49:07 2024 00:35:52.995 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10009msec) 00:35:52.995 slat (nsec): min=5729, max=83126, avg=11289.63, stdev=7622.18 00:35:52.995 clat (usec): min=18024, max=50976, avg=31380.90, stdev=1830.96 00:35:52.995 lat (usec): min=18033, max=50986, avg=31392.19, stdev=1831.19 00:35:52.995 clat percentiles (usec): 00:35:52.995 | 1.00th=[19792], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:35:52.995 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.995 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32375], 95.00th=[32637], 00:35:52.995 | 99.00th=[33424], 99.50th=[36439], 99.90th=[44827], 99.95th=[44827], 00:35:52.995 | 99.99th=[51119] 00:35:52.995 bw ( KiB/s): min= 1920, max= 2171, per=4.18%, avg=2034.26, stdev=58.07, samples=19 00:35:52.995 iops : min= 480, max= 542, avg=508.53, stdev=14.42, samples=19 00:35:52.995 lat (msec) : 20=1.10%, 50=98.86%, 100=0.04% 00:35:52.995 cpu : usr=99.12%, sys=0.59%, ctx=9, majf=0, minf=40 00:35:52.995 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:52.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:52.995 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.995 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.995 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.995 filename1: (groupid=0, jobs=1): err= 0: pid=3326195: Mon May 13 20:49:07 2024 00:35:52.995 read: IOPS=513, BW=2055KiB/s (2104kB/s)(20.1MiB/10029msec) 00:35:52.995 slat (nsec): min=5802, max=90928, avg=14755.17, stdev=11195.99 00:35:52.995 clat (usec): min=4476, max=39950, avg=31024.02, stdev=3184.66 00:35:52.995 lat (usec): min=4492, max=39957, avg=31038.78, stdev=3184.24 00:35:52.995 clat percentiles (usec): 00:35:52.995 | 1.00th=[ 9110], 5.00th=[30278], 10.00th=[30802], 20.00th=[31065], 00:35:52.995 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.995 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:35:52.995 | 99.00th=[33162], 99.50th=[33424], 99.90th=[36963], 99.95th=[36963], 00:35:52.995 | 99.99th=[40109] 00:35:52.995 bw ( KiB/s): min= 1916, max= 2560, per=4.22%, avg=2054.53, stdev=131.56, samples=19 00:35:52.995 iops : min= 479, max= 640, avg=513.63, stdev=32.89, samples=19 00:35:52.995 lat (msec) : 10=1.20%, 20=0.66%, 50=98.14% 00:35:52.995 cpu : usr=97.98%, sys=1.07%, ctx=84, majf=0, minf=25 00:35:52.995 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:52.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.995 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.995 issued rwts: total=5152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.995 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.995 filename1: (groupid=0, jobs=1): err= 0: pid=3326196: Mon May 13 20:49:07 2024 00:35:52.995 read: IOPS=506, BW=2026KiB/s (2075kB/s)(19.8MiB/10013msec) 00:35:52.995 slat (nsec): min=5731, max=90283, avg=23954.08, stdev=15143.92 00:35:52.995 clat (usec): min=23585, max=46534, avg=31359.22, stdev=1144.27 00:35:52.995 lat (usec): min=23594, max=46544, avg=31383.18, stdev=1143.54 00:35:52.995 clat percentiles (usec): 00:35:52.995 | 1.00th=[29230], 5.00th=[30278], 10.00th=[30802], 20.00th=[31065], 00:35:52.995 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.995 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:35:52.995 | 99.00th=[33162], 99.50th=[34341], 99.90th=[42206], 99.95th=[42730], 00:35:52.995 | 99.99th=[46400] 00:35:52.995 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=2027.68, stdev=47.48, samples=19 00:35:52.995 iops : min= 480, max= 512, avg=506.84, stdev=11.95, samples=19 00:35:52.995 lat (msec) : 50=100.00% 00:35:52.995 cpu : usr=98.87%, sys=0.72%, ctx=134, majf=0, minf=25 00:35:52.995 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:52.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.995 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.995 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.995 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.995 filename1: (groupid=0, jobs=1): err= 0: pid=3326197: Mon May 13 20:49:07 2024 00:35:52.995 read: IOPS=507, BW=2031KiB/s (2080kB/s)(19.8MiB/10002msec) 00:35:52.995 slat (nsec): min=5619, max=93835, avg=17629.29, stdev=14022.40 00:35:52.995 clat (usec): min=2163, max=57495, avg=31442.71, stdev=3802.61 00:35:52.995 lat (usec): min=2169, max=57512, 
avg=31460.34, stdev=3802.25 00:35:52.995 clat percentiles (usec): 00:35:52.995 | 1.00th=[19268], 5.00th=[25560], 10.00th=[29754], 20.00th=[30802], 00:35:52.995 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:35:52.995 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32900], 95.00th=[36963], 00:35:52.995 | 99.00th=[45876], 99.50th=[48497], 99.90th=[57410], 99.95th=[57410], 00:35:52.995 | 99.99th=[57410] 00:35:52.995 bw ( KiB/s): min= 1843, max= 2112, per=4.15%, avg=2022.89, stdev=67.92, samples=19 00:35:52.995 iops : min= 460, max= 528, avg=505.68, stdev=17.09, samples=19 00:35:52.995 lat (msec) : 4=0.12%, 10=0.12%, 20=1.08%, 50=98.37%, 100=0.32% 00:35:52.995 cpu : usr=99.18%, sys=0.54%, ctx=9, majf=0, minf=27 00:35:52.995 IO depths : 1=0.2%, 2=0.8%, 4=4.3%, 8=77.9%, 16=16.8%, 32=0.0%, >=64=0.0% 00:35:52.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.995 complete : 0=0.0%, 4=89.9%, 8=8.6%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.995 issued rwts: total=5078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.995 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.995 filename1: (groupid=0, jobs=1): err= 0: pid=3326198: Mon May 13 20:49:07 2024 00:35:52.995 read: IOPS=507, BW=2028KiB/s (2077kB/s)(19.8MiB/10002msec) 00:35:52.995 slat (nsec): min=5730, max=82223, avg=12828.27, stdev=9787.42 00:35:52.995 clat (usec): min=2747, max=57161, avg=31449.30, stdev=2311.23 00:35:52.995 lat (usec): min=2754, max=57183, avg=31462.13, stdev=2311.54 00:35:52.995 clat percentiles (usec): 00:35:52.995 | 1.00th=[29492], 5.00th=[30278], 10.00th=[30802], 20.00th=[31065], 00:35:52.995 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:35:52.995 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32637], 00:35:52.995 | 99.00th=[33424], 99.50th=[34341], 99.90th=[56886], 99.95th=[56886], 00:35:52.995 | 99.99th=[57410] 00:35:52.995 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=2020.21, stdev=49.41, samples=19 00:35:52.995 iops : min= 480, max= 512, avg=505.05, stdev=12.35, samples=19 00:35:52.995 lat (msec) : 4=0.04%, 10=0.12%, 20=0.47%, 50=99.05%, 100=0.32% 00:35:52.995 cpu : usr=99.12%, sys=0.51%, ctx=71, majf=0, minf=27 00:35:52.995 IO depths : 1=0.4%, 2=6.6%, 4=25.0%, 8=55.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:35:52.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.995 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.995 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.995 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.995 filename1: (groupid=0, jobs=1): err= 0: pid=3326199: Mon May 13 20:49:07 2024 00:35:52.995 read: IOPS=495, BW=1984KiB/s (2031kB/s)(19.4MiB/10002msec) 00:35:52.996 slat (nsec): min=5693, max=90534, avg=16887.54, stdev=11677.22 00:35:52.996 clat (usec): min=3410, max=68338, avg=32120.47, stdev=4915.46 00:35:52.996 lat (usec): min=3416, max=68355, avg=32137.36, stdev=4914.99 00:35:52.996 clat percentiles (usec): 00:35:52.996 | 1.00th=[18220], 5.00th=[25822], 10.00th=[30540], 20.00th=[31065], 00:35:52.996 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.996 | 70.00th=[31851], 80.00th=[32113], 90.00th=[36439], 95.00th=[43779], 00:35:52.996 | 99.00th=[45876], 99.50th=[50070], 99.90th=[68682], 99.95th=[68682], 00:35:52.996 | 99.99th=[68682] 00:35:52.996 bw ( KiB/s): min= 1632, max= 2160, per=4.06%, avg=1974.05, stdev=140.51, samples=19 00:35:52.996 iops : 
min= 408, max= 540, avg=493.47, stdev=35.18, samples=19 00:35:52.996 lat (msec) : 4=0.04%, 10=0.28%, 20=0.85%, 50=98.39%, 100=0.44% 00:35:52.996 cpu : usr=98.01%, sys=1.07%, ctx=59, majf=0, minf=17 00:35:52.996 IO depths : 1=4.1%, 2=8.2%, 4=18.7%, 8=59.6%, 16=9.4%, 32=0.0%, >=64=0.0% 00:35:52.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.996 complete : 0=0.0%, 4=92.7%, 8=2.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.996 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.996 filename1: (groupid=0, jobs=1): err= 0: pid=3326200: Mon May 13 20:49:07 2024 00:35:52.996 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10009msec) 00:35:52.996 slat (nsec): min=5590, max=68526, avg=15532.02, stdev=9387.04 00:35:52.996 clat (usec): min=10160, max=53682, avg=31336.96, stdev=3084.62 00:35:52.996 lat (usec): min=10167, max=53705, avg=31352.49, stdev=3085.17 00:35:52.996 clat percentiles (usec): 00:35:52.996 | 1.00th=[19530], 5.00th=[29754], 10.00th=[30540], 20.00th=[31065], 00:35:52.996 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.996 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32375], 95.00th=[32637], 00:35:52.996 | 99.00th=[44303], 99.50th=[48497], 99.90th=[53740], 99.95th=[53740], 00:35:52.996 | 99.99th=[53740] 00:35:52.996 bw ( KiB/s): min= 1904, max= 2112, per=4.15%, avg=2020.42, stdev=55.76, samples=19 00:35:52.996 iops : min= 476, max= 528, avg=505.11, stdev=13.94, samples=19 00:35:52.996 lat (msec) : 20=1.08%, 50=98.60%, 100=0.31% 00:35:52.996 cpu : usr=99.11%, sys=0.59%, ctx=15, majf=0, minf=23 00:35:52.996 IO depths : 1=4.7%, 2=9.4%, 4=20.2%, 8=57.0%, 16=8.7%, 32=0.0%, >=64=0.0% 00:35:52.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.996 complete : 0=0.0%, 4=93.1%, 8=2.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.996 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.996 filename1: (groupid=0, jobs=1): err= 0: pid=3326201: Mon May 13 20:49:07 2024 00:35:52.996 read: IOPS=506, BW=2026KiB/s (2074kB/s)(19.8MiB/10015msec) 00:35:52.996 slat (usec): min=5, max=106, avg=21.39, stdev=13.53 00:35:52.996 clat (usec): min=22128, max=53786, avg=31400.96, stdev=1243.57 00:35:52.996 lat (usec): min=22144, max=53802, avg=31422.35, stdev=1242.73 00:35:52.996 clat percentiles (usec): 00:35:52.996 | 1.00th=[29492], 5.00th=[30278], 10.00th=[30802], 20.00th=[31065], 00:35:52.996 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.996 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:35:52.996 | 99.00th=[33424], 99.50th=[34341], 99.90th=[44303], 99.95th=[53740], 00:35:52.996 | 99.99th=[53740] 00:35:52.996 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=2022.30, stdev=52.11, samples=20 00:35:52.996 iops : min= 480, max= 512, avg=505.50, stdev=13.09, samples=20 00:35:52.996 lat (msec) : 50=99.94%, 100=0.06% 00:35:52.996 cpu : usr=99.00%, sys=0.66%, ctx=29, majf=0, minf=19 00:35:52.996 IO depths : 1=6.2%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.996 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.996 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.996 latency : target=0, window=0, percentile=100.00%, depth=16 
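Each block above is fio's standard per-job summary for this randread pass: completion-latency percentiles in microseconds, average bandwidth and IOPS with sample counts, the CPU split, and the IO-depth distribution for the queue depth of 16 used here. To compare the jobs at a glance after the fact, a small post-processing helper can pull the averages out of a saved copy of this output. The file name below is hypothetical and the commands assume the log keeps fio's usual one-field-per-line layout; this is a convenience sketch, not part of dif.sh.

    # Hypothetical helper: print "avg KiB/s   avg IOPS" for each fio job in the saved log.
    # Relies on fio printing the "bw (...)" line immediately before the matching "iops :" line.
    grep -E 'bw \(|iops +:' fio_rand_params.log \
      | sed -E 's/.*avg=([0-9.]+).*/\1/' \
      | paste - -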
00:35:52.996 filename2: (groupid=0, jobs=1): err= 0: pid=3326202: Mon May 13 20:49:07 2024 00:35:52.996 read: IOPS=506, BW=2027KiB/s (2076kB/s)(19.8MiB/10008msec) 00:35:52.996 slat (nsec): min=5768, max=81988, avg=18463.98, stdev=13067.71 00:35:52.996 clat (usec): min=16433, max=42935, avg=31412.28, stdev=1271.94 00:35:52.996 lat (usec): min=16440, max=42953, avg=31430.75, stdev=1271.12 00:35:52.996 clat percentiles (usec): 00:35:52.996 | 1.00th=[29492], 5.00th=[30278], 10.00th=[30802], 20.00th=[31065], 00:35:52.996 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.996 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:35:52.996 | 99.00th=[32900], 99.50th=[33424], 99.90th=[42730], 99.95th=[42730], 00:35:52.996 | 99.99th=[42730] 00:35:52.996 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2027.53, stdev=64.11, samples=19 00:35:52.996 iops : min= 480, max= 544, avg=506.84, stdev=16.02, samples=19 00:35:52.996 lat (msec) : 20=0.32%, 50=99.68% 00:35:52.996 cpu : usr=99.21%, sys=0.50%, ctx=10, majf=0, minf=22 00:35:52.996 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.996 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.996 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.996 filename2: (groupid=0, jobs=1): err= 0: pid=3326203: Mon May 13 20:49:07 2024 00:35:52.996 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10009msec) 00:35:52.996 slat (nsec): min=5869, max=86738, avg=18316.70, stdev=11241.22 00:35:52.996 clat (usec): min=11892, max=35648, avg=31325.47, stdev=1313.33 00:35:52.996 lat (usec): min=11904, max=35686, avg=31343.78, stdev=1313.32 00:35:52.996 clat percentiles (usec): 00:35:52.996 | 1.00th=[28967], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:35:52.996 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.996 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:35:52.996 | 99.00th=[32900], 99.50th=[33162], 99.90th=[33424], 99.95th=[33424], 00:35:52.996 | 99.99th=[35390] 00:35:52.996 bw ( KiB/s): min= 1920, max= 2048, per=4.18%, avg=2034.26, stdev=40.28, samples=19 00:35:52.996 iops : min= 480, max= 512, avg=508.53, stdev=10.06, samples=19 00:35:52.996 lat (msec) : 20=0.63%, 50=99.37% 00:35:52.996 cpu : usr=97.68%, sys=1.30%, ctx=56, majf=0, minf=21 00:35:52.996 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.996 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.996 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.996 filename2: (groupid=0, jobs=1): err= 0: pid=3326204: Mon May 13 20:49:07 2024 00:35:52.996 read: IOPS=546, BW=2188KiB/s (2240kB/s)(21.4MiB/10002msec) 00:35:52.996 slat (nsec): min=2804, max=74635, avg=12291.40, stdev=8764.39 00:35:52.996 clat (usec): min=4100, max=52485, avg=29164.32, stdev=5826.59 00:35:52.996 lat (usec): min=4105, max=52504, avg=29176.61, stdev=5828.82 00:35:52.996 clat percentiles (usec): 00:35:52.996 | 1.00th=[ 8029], 5.00th=[18744], 10.00th=[20841], 20.00th=[24773], 00:35:52.996 | 30.00th=[30278], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 
00:35:52.996 | 70.00th=[31589], 80.00th=[31589], 90.00th=[32113], 95.00th=[32900], 00:35:52.996 | 99.00th=[50070], 99.50th=[51119], 99.90th=[52167], 99.95th=[52691], 00:35:52.996 | 99.99th=[52691] 00:35:52.996 bw ( KiB/s): min= 1920, max= 2560, per=4.48%, avg=2181.89, stdev=175.45, samples=19 00:35:52.996 iops : min= 480, max= 640, avg=545.47, stdev=43.86, samples=19 00:35:52.996 lat (msec) : 10=1.12%, 20=6.86%, 50=91.08%, 100=0.95% 00:35:52.996 cpu : usr=98.91%, sys=0.78%, ctx=72, majf=0, minf=29 00:35:52.996 IO depths : 1=3.6%, 2=7.2%, 4=17.0%, 8=62.4%, 16=9.7%, 32=0.0%, >=64=0.0% 00:35:52.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.996 complete : 0=0.0%, 4=92.2%, 8=2.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.996 issued rwts: total=5470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.996 filename2: (groupid=0, jobs=1): err= 0: pid=3326205: Mon May 13 20:49:07 2024 00:35:52.996 read: IOPS=513, BW=2056KiB/s (2105kB/s)(20.1MiB/10027msec) 00:35:52.996 slat (usec): min=4, max=113, avg=18.32, stdev=14.49 00:35:52.996 clat (usec): min=4799, max=53942, avg=30980.03, stdev=3242.05 00:35:52.996 lat (usec): min=4807, max=53953, avg=30998.35, stdev=3242.54 00:35:52.996 clat percentiles (usec): 00:35:52.996 | 1.00th=[ 8455], 5.00th=[30016], 10.00th=[30540], 20.00th=[31065], 00:35:52.996 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.996 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:35:52.996 | 99.00th=[33162], 99.50th=[33424], 99.90th=[49546], 99.95th=[49546], 00:35:52.996 | 99.99th=[53740] 00:35:52.996 bw ( KiB/s): min= 1920, max= 2440, per=4.22%, avg=2054.30, stdev=107.23, samples=20 00:35:52.996 iops : min= 480, max= 610, avg=513.50, stdev=26.82, samples=20 00:35:52.996 lat (msec) : 10=1.26%, 20=0.62%, 50=98.08%, 100=0.04% 00:35:52.997 cpu : usr=97.83%, sys=1.18%, ctx=63, majf=0, minf=27 00:35:52.997 IO depths : 1=5.9%, 2=11.9%, 4=24.3%, 8=51.2%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:52.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.997 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.997 issued rwts: total=5153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.997 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.997 filename2: (groupid=0, jobs=1): err= 0: pid=3326206: Mon May 13 20:49:07 2024 00:35:52.997 read: IOPS=507, BW=2029KiB/s (2077kB/s)(19.8MiB/10001msec) 00:35:52.997 slat (nsec): min=5737, max=76398, avg=17582.01, stdev=11654.54 00:35:52.997 clat (usec): min=17937, max=33298, avg=31399.82, stdev=934.96 00:35:52.997 lat (usec): min=17962, max=33306, avg=31417.40, stdev=934.16 00:35:52.997 clat percentiles (usec): 00:35:52.997 | 1.00th=[30016], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:35:52.997 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.997 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32637], 00:35:52.997 | 99.00th=[32900], 99.50th=[32900], 99.90th=[33162], 99.95th=[33162], 00:35:52.997 | 99.99th=[33424] 00:35:52.997 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=2027.53, stdev=47.85, samples=19 00:35:52.997 iops : min= 480, max= 512, avg=506.84, stdev=11.95, samples=19 00:35:52.997 lat (msec) : 20=0.32%, 50=99.68% 00:35:52.997 cpu : usr=99.23%, sys=0.48%, ctx=65, majf=0, minf=20 00:35:52.997 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 
00:35:52.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.997 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.997 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.997 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.997 filename2: (groupid=0, jobs=1): err= 0: pid=3326207: Mon May 13 20:49:07 2024 00:35:52.997 read: IOPS=506, BW=2026KiB/s (2075kB/s)(19.8MiB/10013msec) 00:35:52.997 slat (usec): min=5, max=115, avg=23.16, stdev=15.88 00:35:52.997 clat (usec): min=20730, max=42445, avg=31387.19, stdev=1042.85 00:35:52.997 lat (usec): min=20739, max=42463, avg=31410.34, stdev=1040.73 00:35:52.997 clat percentiles (usec): 00:35:52.997 | 1.00th=[29492], 5.00th=[30278], 10.00th=[30802], 20.00th=[31065], 00:35:52.997 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.997 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:35:52.997 | 99.00th=[33162], 99.50th=[34341], 99.90th=[42206], 99.95th=[42206], 00:35:52.997 | 99.99th=[42206] 00:35:52.997 bw ( KiB/s): min= 1920, max= 2064, per=4.16%, avg=2027.68, stdev=47.78, samples=19 00:35:52.997 iops : min= 480, max= 516, avg=506.84, stdev=12.02, samples=19 00:35:52.997 lat (msec) : 50=100.00% 00:35:52.997 cpu : usr=98.38%, sys=0.93%, ctx=29, majf=0, minf=17 00:35:52.997 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.997 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.997 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.997 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.997 filename2: (groupid=0, jobs=1): err= 0: pid=3326208: Mon May 13 20:49:07 2024 00:35:52.997 read: IOPS=490, BW=1960KiB/s (2007kB/s)(19.1MiB/10002msec) 00:35:52.997 slat (nsec): min=5734, max=83555, avg=13052.66, stdev=10612.18 00:35:52.997 clat (usec): min=6772, max=68223, avg=32561.66, stdev=4538.87 00:35:52.997 lat (usec): min=6778, max=68238, avg=32574.71, stdev=4538.98 00:35:52.997 clat percentiles (usec): 00:35:52.997 | 1.00th=[22152], 5.00th=[27657], 10.00th=[30802], 20.00th=[31327], 00:35:52.997 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:35:52.997 | 70.00th=[31851], 80.00th=[32375], 90.00th=[38536], 95.00th=[44303], 00:35:52.997 | 99.00th=[46400], 99.50th=[47973], 99.90th=[57410], 99.95th=[57410], 00:35:52.997 | 99.99th=[68682] 00:35:52.997 bw ( KiB/s): min= 1568, max= 2048, per=4.00%, avg=1948.79, stdev=148.74, samples=19 00:35:52.997 iops : min= 392, max= 512, avg=487.16, stdev=37.23, samples=19 00:35:52.997 lat (msec) : 10=0.06%, 20=0.59%, 50=99.02%, 100=0.33% 00:35:52.997 cpu : usr=99.07%, sys=0.59%, ctx=67, majf=0, minf=27 00:35:52.997 IO depths : 1=2.2%, 2=6.0%, 4=17.7%, 8=62.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:35:52.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.997 complete : 0=0.0%, 4=92.7%, 8=2.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.997 issued rwts: total=4902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.997 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.997 filename2: (groupid=0, jobs=1): err= 0: pid=3326209: Mon May 13 20:49:07 2024 00:35:52.997 read: IOPS=506, BW=2027KiB/s (2075kB/s)(19.8MiB/10011msec) 00:35:52.997 slat (nsec): min=5766, max=92469, avg=21341.95, stdev=14518.94 00:35:52.997 clat (usec): min=12952, 
max=45892, avg=31389.49, stdev=1353.63 00:35:52.997 lat (usec): min=12959, max=45914, avg=31410.83, stdev=1352.80 00:35:52.997 clat percentiles (usec): 00:35:52.997 | 1.00th=[29492], 5.00th=[30278], 10.00th=[30802], 20.00th=[31065], 00:35:52.997 | 30.00th=[31327], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:35:52.997 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32113], 95.00th=[32375], 00:35:52.997 | 99.00th=[33162], 99.50th=[34341], 99.90th=[45876], 99.95th=[45876], 00:35:52.997 | 99.99th=[45876] 00:35:52.997 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=2020.79, stdev=53.49, samples=19 00:35:52.997 iops : min= 480, max= 512, avg=505.16, stdev=13.36, samples=19 00:35:52.997 lat (msec) : 20=0.32%, 50=99.68% 00:35:52.997 cpu : usr=99.26%, sys=0.46%, ctx=13, majf=0, minf=18 00:35:52.997 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.997 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.997 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.997 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.997 00:35:52.997 Run status group 0 (all jobs): 00:35:52.997 READ: bw=47.5MiB/s (49.8MB/s), 1960KiB/s-2188KiB/s (2007kB/s-2240kB/s), io=477MiB (500MB), run=10001-10029msec 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:52.997 20:49:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:52.997 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.998 bdev_null0 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.998 [2024-05-13 20:49:07.656611] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.998 bdev_null1 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.998 20:49:07 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:52.998 { 00:35:52.998 "params": { 00:35:52.998 "name": "Nvme$subsystem", 00:35:52.998 "trtype": "$TEST_TRANSPORT", 00:35:52.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:52.998 "adrfam": "ipv4", 00:35:52.998 "trsvcid": "$NVMF_PORT", 00:35:52.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:52.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:52.998 "hdgst": ${hdgst:-false}, 00:35:52.998 "ddgst": ${ddgst:-false} 00:35:52.998 }, 00:35:52.998 "method": "bdev_nvme_attach_controller" 00:35:52.998 } 00:35:52.998 EOF 00:35:52.998 )") 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:52.998 { 00:35:52.998 "params": { 00:35:52.998 "name": "Nvme$subsystem", 00:35:52.998 "trtype": "$TEST_TRANSPORT", 00:35:52.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:52.998 "adrfam": "ipv4", 00:35:52.998 "trsvcid": "$NVMF_PORT", 00:35:52.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:52.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:52.998 "hdgst": ${hdgst:-false}, 00:35:52.998 "ddgst": ${ddgst:-false} 
00:35:52.998 }, 00:35:52.998 "method": "bdev_nvme_attach_controller" 00:35:52.998 } 00:35:52.998 EOF 00:35:52.998 )") 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:52.998 20:49:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:52.998 "params": { 00:35:52.998 "name": "Nvme0", 00:35:52.998 "trtype": "tcp", 00:35:52.998 "traddr": "10.0.0.2", 00:35:52.998 "adrfam": "ipv4", 00:35:52.998 "trsvcid": "4420", 00:35:52.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:52.998 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:52.998 "hdgst": false, 00:35:52.998 "ddgst": false 00:35:52.998 }, 00:35:52.998 "method": "bdev_nvme_attach_controller" 00:35:52.998 },{ 00:35:52.998 "params": { 00:35:52.998 "name": "Nvme1", 00:35:52.998 "trtype": "tcp", 00:35:52.998 "traddr": "10.0.0.2", 00:35:52.998 "adrfam": "ipv4", 00:35:52.998 "trsvcid": "4420", 00:35:52.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:52.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:52.999 "hdgst": false, 00:35:52.999 "ddgst": false 00:35:52.999 }, 00:35:52.999 "method": "bdev_nvme_attach_controller" 00:35:52.999 }' 00:35:52.999 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:52.999 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:52.999 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:52.999 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.999 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:52.999 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:52.999 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:52.999 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:52.999 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:52.999 20:49:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.999 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:52.999 ... 00:35:52.999 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:52.999 ... 
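The invocation traced above is how dif.sh points stock fio at SPDK instead of kernel block devices: the spdk_bdev fio plugin from build/fio/spdk_bdev is LD_PRELOADed, --ioengine=spdk_bdev routes I/O through it, and --spdk_json_conf feeds in the bdev_nvme_attach_controller configuration printed just before, which connects Nvme0 and Nvme1 to the two NVMe/TCP subsystems on 10.0.0.2:4420. In the harness both the JSON and the job file arrive as anonymous /dev/fd streams; run by hand with ordinary files (the two file names below are illustrative, not taken from this log) the same call looks roughly like:

    # Sketch only: bdev.json holds the attach_controller config shown above, dif.fio the job file.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio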
00:35:52.999 fio-3.35 00:35:52.999 Starting 4 threads 00:35:52.999 EAL: No free 2048 kB hugepages reported on node 1 00:35:58.291 00:35:58.291 filename0: (groupid=0, jobs=1): err= 0: pid=3328680: Mon May 13 20:49:13 2024 00:35:58.291 read: IOPS=2142, BW=16.7MiB/s (17.6MB/s)(83.8MiB/5003msec) 00:35:58.291 slat (nsec): min=5581, max=61571, avg=8264.62, stdev=3712.73 00:35:58.291 clat (usec): min=1522, max=6815, avg=3710.04, stdev=511.03 00:35:58.291 lat (usec): min=1531, max=6822, avg=3718.30, stdev=511.00 00:35:58.291 clat percentiles (usec): 00:35:58.291 | 1.00th=[ 2704], 5.00th=[ 3097], 10.00th=[ 3261], 20.00th=[ 3425], 00:35:58.291 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3687], 60.00th=[ 3720], 00:35:58.291 | 70.00th=[ 3752], 80.00th=[ 3818], 90.00th=[ 4228], 95.00th=[ 4883], 00:35:58.291 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 6587], 99.95th=[ 6652], 00:35:58.291 | 99.99th=[ 6783] 00:35:58.291 bw ( KiB/s): min=16560, max=17808, per=25.16%, avg=17145.60, stdev=395.76, samples=10 00:35:58.291 iops : min= 2070, max= 2226, avg=2143.20, stdev=49.47, samples=10 00:35:58.291 lat (msec) : 2=0.07%, 4=84.42%, 10=15.50% 00:35:58.291 cpu : usr=97.12%, sys=2.62%, ctx=10, majf=0, minf=9 00:35:58.291 IO depths : 1=0.2%, 2=0.8%, 4=71.8%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.291 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.291 issued rwts: total=10721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.291 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:58.291 filename0: (groupid=0, jobs=1): err= 0: pid=3328681: Mon May 13 20:49:13 2024 00:35:58.291 read: IOPS=2118, BW=16.6MiB/s (17.4MB/s)(82.8MiB/5002msec) 00:35:58.291 slat (nsec): min=5563, max=63582, avg=7154.77, stdev=3741.14 00:35:58.291 clat (usec): min=1441, max=6791, avg=3756.44, stdev=604.16 00:35:58.291 lat (usec): min=1447, max=6796, avg=3763.60, stdev=603.92 00:35:58.291 clat percentiles (usec): 00:35:58.291 | 1.00th=[ 2540], 5.00th=[ 3032], 10.00th=[ 3228], 20.00th=[ 3392], 00:35:58.291 | 30.00th=[ 3490], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3687], 00:35:58.291 | 70.00th=[ 3752], 80.00th=[ 3884], 90.00th=[ 4686], 95.00th=[ 5342], 00:35:58.291 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 6128], 99.95th=[ 6194], 00:35:58.291 | 99.99th=[ 6783] 00:35:58.291 bw ( KiB/s): min=16304, max=17200, per=24.87%, avg=16942.40, stdev=275.22, samples=10 00:35:58.291 iops : min= 2038, max= 2150, avg=2117.80, stdev=34.40, samples=10 00:35:58.291 lat (msec) : 2=0.08%, 4=82.02%, 10=17.90% 00:35:58.291 cpu : usr=97.46%, sys=2.30%, ctx=13, majf=0, minf=9 00:35:58.291 IO depths : 1=0.2%, 2=0.8%, 4=71.8%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.291 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.291 issued rwts: total=10597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.291 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:58.291 filename1: (groupid=0, jobs=1): err= 0: pid=3328682: Mon May 13 20:49:13 2024 00:35:58.291 read: IOPS=2154, BW=16.8MiB/s (17.6MB/s)(84.2MiB/5004msec) 00:35:58.291 slat (nsec): min=5565, max=51872, avg=6753.26, stdev=2962.83 00:35:58.291 clat (usec): min=1339, max=44702, avg=3693.64, stdev=1242.18 00:35:58.291 lat (usec): min=1345, max=44728, avg=3700.40, stdev=1242.35 00:35:58.291 clat percentiles (usec): 00:35:58.291 | 1.00th=[ 2474], 5.00th=[ 
2868], 10.00th=[ 3130], 20.00th=[ 3359], 00:35:58.291 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3654], 60.00th=[ 3720], 00:35:58.291 | 70.00th=[ 3720], 80.00th=[ 3785], 90.00th=[ 4228], 95.00th=[ 4817], 00:35:58.291 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 6587], 99.95th=[44827], 00:35:58.291 | 99.99th=[44827] 00:35:58.291 bw ( KiB/s): min=16784, max=18112, per=25.31%, avg=17241.60, stdev=444.76, samples=10 00:35:58.291 iops : min= 2098, max= 2264, avg=2155.20, stdev=55.60, samples=10 00:35:58.291 lat (msec) : 2=0.15%, 4=84.74%, 10=15.04%, 50=0.07% 00:35:58.291 cpu : usr=97.70%, sys=2.04%, ctx=11, majf=0, minf=9 00:35:58.291 IO depths : 1=0.1%, 2=1.1%, 4=71.1%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.291 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.291 issued rwts: total=10781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.291 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:58.291 filename1: (groupid=0, jobs=1): err= 0: pid=3328683: Mon May 13 20:49:13 2024 00:35:58.291 read: IOPS=2102, BW=16.4MiB/s (17.2MB/s)(82.2MiB/5002msec) 00:35:58.291 slat (nsec): min=5572, max=60813, avg=7009.25, stdev=3532.64 00:35:58.291 clat (usec): min=2193, max=6693, avg=3784.26, stdev=598.26 00:35:58.291 lat (usec): min=2198, max=6699, avg=3791.27, stdev=598.02 00:35:58.291 clat percentiles (usec): 00:35:58.291 | 1.00th=[ 2737], 5.00th=[ 3130], 10.00th=[ 3294], 20.00th=[ 3425], 00:35:58.291 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3720], 00:35:58.291 | 70.00th=[ 3752], 80.00th=[ 3884], 90.00th=[ 4752], 95.00th=[ 5342], 00:35:58.291 | 99.00th=[ 5669], 99.50th=[ 5932], 99.90th=[ 6390], 99.95th=[ 6521], 00:35:58.291 | 99.99th=[ 6718] 00:35:58.291 bw ( KiB/s): min=16512, max=17216, per=24.69%, avg=16824.00, stdev=222.50, samples=10 00:35:58.291 iops : min= 2064, max= 2152, avg=2103.00, stdev=27.81, samples=10 00:35:58.291 lat (msec) : 4=81.92%, 10=18.08% 00:35:58.291 cpu : usr=97.34%, sys=2.42%, ctx=10, majf=0, minf=9 00:35:58.292 IO depths : 1=0.2%, 2=0.9%, 4=72.1%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.292 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.292 issued rwts: total=10518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.292 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:58.292 00:35:58.292 Run status group 0 (all jobs): 00:35:58.292 READ: bw=66.5MiB/s (69.8MB/s), 16.4MiB/s-16.8MiB/s (17.2MB/s-17.6MB/s), io=333MiB (349MB), run=5002-5004msec 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.292 00:35:58.292 real 0m24.194s 00:35:58.292 user 5m16.042s 00:35:58.292 sys 0m3.889s 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:58.292 20:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 ************************************ 00:35:58.292 END TEST fio_dif_rand_params 00:35:58.292 ************************************ 00:35:58.292 20:49:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:58.292 20:49:14 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:58.292 20:49:14 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:58.292 20:49:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 ************************************ 00:35:58.292 START TEST fio_dif_digest 00:35:58.292 ************************************ 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 bdev_null0 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 [2024-05-13 20:49:14.195594] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:58.292 { 00:35:58.292 "params": { 00:35:58.292 "name": "Nvme$subsystem", 00:35:58.292 "trtype": "$TEST_TRANSPORT", 00:35:58.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:58.292 "adrfam": "ipv4", 00:35:58.292 "trsvcid": "$NVMF_PORT", 00:35:58.292 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:58.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:58.292 "hdgst": ${hdgst:-false}, 00:35:58.292 "ddgst": ${ddgst:-false} 00:35:58.292 }, 00:35:58.292 "method": "bdev_nvme_attach_controller" 00:35:58.292 } 00:35:58.292 EOF 00:35:58.292 )") 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:58.292 20:49:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:58.292 "params": { 00:35:58.292 "name": "Nvme0", 00:35:58.292 "trtype": "tcp", 00:35:58.292 "traddr": "10.0.0.2", 00:35:58.292 "adrfam": "ipv4", 00:35:58.292 "trsvcid": "4420", 00:35:58.292 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:58.292 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:58.292 "hdgst": true, 00:35:58.292 "ddgst": true 00:35:58.292 }, 00:35:58.292 "method": "bdev_nvme_attach_controller" 00:35:58.292 }' 00:35:58.576 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:58.576 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:58.576 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:58.576 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:58.576 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:58.576 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:58.576 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:58.576 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:58.576 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:58.576 20:49:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.842 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:58.842 ... 
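Before the three digest threads start, it helps to recall what they read from: a few entries back the test created bdev_null0 with 512-byte blocks, 16 bytes of per-block metadata and DIF type 3, added it as a namespace of nqn.2016-06.io.spdk:cnode0, and put a TCP listener on 10.0.0.2:4420; the host-side JSON above then attaches with "hdgst" and "ddgst" set to true, so the NVMe/TCP connection runs with header and data digests enabled. Rebuilt outside the harness with SPDK's RPC client (scripts/rpc.py is assumed here; the values are copied from the rpc_cmd calls above), the target setup is roughly:

    # Sketch of the equivalent standalone target setup; assumes the tcp transport
    # was already created, as the test did earlier in its own setup.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420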
00:35:58.842 fio-3.35 00:35:58.842 Starting 3 threads 00:35:58.842 EAL: No free 2048 kB hugepages reported on node 1 00:36:11.073 00:36:11.073 filename0: (groupid=0, jobs=1): err= 0: pid=3329900: Mon May 13 20:49:25 2024 00:36:11.073 read: IOPS=220, BW=27.6MiB/s (28.9MB/s)(277MiB/10047msec) 00:36:11.073 slat (nsec): min=5763, max=31850, avg=6726.37, stdev=1035.79 00:36:11.073 clat (usec): min=7326, max=56389, avg=13578.80, stdev=4152.08 00:36:11.073 lat (usec): min=7332, max=56396, avg=13585.53, stdev=4152.08 00:36:11.073 clat percentiles (usec): 00:36:11.073 | 1.00th=[ 9241], 5.00th=[10290], 10.00th=[10945], 20.00th=[12125], 00:36:11.073 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:36:11.073 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15008], 95.00th=[15401], 00:36:11.073 | 99.00th=[16909], 99.50th=[54264], 99.90th=[55313], 99.95th=[55837], 00:36:11.073 | 99.99th=[56361] 00:36:11.073 bw ( KiB/s): min=25088, max=30720, per=35.68%, avg=28326.40, stdev=1578.69, samples=20 00:36:11.073 iops : min= 196, max= 240, avg=221.30, stdev=12.33, samples=20 00:36:11.073 lat (msec) : 10=3.88%, 20=95.21%, 50=0.05%, 100=0.86% 00:36:11.073 cpu : usr=95.64%, sys=4.12%, ctx=17, majf=0, minf=176 00:36:11.073 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:11.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.073 issued rwts: total=2215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.073 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:11.073 filename0: (groupid=0, jobs=1): err= 0: pid=3329901: Mon May 13 20:49:25 2024 00:36:11.073 read: IOPS=190, BW=23.9MiB/s (25.0MB/s)(240MiB/10046msec) 00:36:11.073 slat (nsec): min=5789, max=31918, avg=6754.42, stdev=948.90 00:36:11.073 clat (usec): min=6512, max=99599, avg=15681.43, stdev=7927.81 00:36:11.073 lat (usec): min=6518, max=99606, avg=15688.19, stdev=7927.82 00:36:11.073 clat percentiles (usec): 00:36:11.073 | 1.00th=[ 9372], 5.00th=[10814], 10.00th=[12256], 20.00th=[13304], 00:36:11.073 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14615], 60.00th=[14877], 00:36:11.073 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16319], 95.00th=[17171], 00:36:11.073 | 99.00th=[55837], 99.50th=[56361], 99.90th=[96994], 99.95th=[99091], 00:36:11.073 | 99.99th=[99091] 00:36:11.073 bw ( KiB/s): min=19712, max=28672, per=30.90%, avg=24527.05, stdev=2444.79, samples=20 00:36:11.073 iops : min= 154, max= 224, avg=191.60, stdev=19.11, samples=20 00:36:11.073 lat (msec) : 10=1.93%, 20=94.68%, 50=0.21%, 100=3.18% 00:36:11.073 cpu : usr=96.24%, sys=3.54%, ctx=23, majf=0, minf=79 00:36:11.073 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:11.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.073 issued rwts: total=1918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.073 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:11.073 filename0: (groupid=0, jobs=1): err= 0: pid=3329902: Mon May 13 20:49:25 2024 00:36:11.073 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(262MiB/10046msec) 00:36:11.073 slat (nsec): min=5952, max=33127, avg=8000.55, stdev=1679.19 00:36:11.073 clat (usec): min=8834, max=56915, avg=14334.14, stdev=5529.75 00:36:11.073 lat (usec): min=8840, max=56924, avg=14342.14, stdev=5529.79 00:36:11.073 clat percentiles (usec): 
00:36:11.073 | 1.00th=[ 9503], 5.00th=[10421], 10.00th=[11076], 20.00th=[12518], 00:36:11.073 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13960], 60.00th=[14222], 00:36:11.073 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15664], 95.00th=[16188], 00:36:11.073 | 99.00th=[55313], 99.50th=[55837], 99.90th=[56886], 99.95th=[56886], 00:36:11.073 | 99.99th=[56886] 00:36:11.073 bw ( KiB/s): min=22272, max=29696, per=33.80%, avg=26828.80, stdev=2204.70, samples=20 00:36:11.073 iops : min= 174, max= 232, avg=209.60, stdev=17.22, samples=20 00:36:11.073 lat (msec) : 10=2.43%, 20=95.90%, 50=0.05%, 100=1.62% 00:36:11.073 cpu : usr=95.93%, sys=3.84%, ctx=21, majf=0, minf=156 00:36:11.073 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:11.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.073 issued rwts: total=2098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.073 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:11.073 00:36:11.073 Run status group 0 (all jobs): 00:36:11.073 READ: bw=77.5MiB/s (81.3MB/s), 23.9MiB/s-27.6MiB/s (25.0MB/s-28.9MB/s), io=779MiB (817MB), run=10046-10047msec 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.073 00:36:11.073 real 0m11.207s 00:36:11.073 user 0m45.148s 00:36:11.073 sys 0m1.490s 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:11.073 20:49:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:11.073 ************************************ 00:36:11.073 END TEST fio_dif_digest 00:36:11.073 ************************************ 00:36:11.073 20:49:25 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:11.073 20:49:25 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:11.074 20:49:25 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:11.074 20:49:25 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:11.074 20:49:25 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:11.074 20:49:25 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:11.074 20:49:25 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:11.074 20:49:25 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:11.074 rmmod nvme_tcp 00:36:11.074 rmmod 
nvme_fabrics 00:36:11.074 rmmod nvme_keyring 00:36:11.074 20:49:25 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:11.074 20:49:25 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:11.074 20:49:25 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:11.074 20:49:25 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3319648 ']' 00:36:11.074 20:49:25 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3319648 00:36:11.074 20:49:25 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 3319648 ']' 00:36:11.074 20:49:25 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 3319648 00:36:11.074 20:49:25 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:11.074 20:49:25 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:11.074 20:49:25 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3319648 00:36:11.074 20:49:25 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:11.074 20:49:25 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:11.074 20:49:25 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3319648' 00:36:11.074 killing process with pid 3319648 00:36:11.074 20:49:25 nvmf_dif -- common/autotest_common.sh@965 -- # kill 3319648 00:36:11.074 [2024-05-13 20:49:25.542956] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:11.074 20:49:25 nvmf_dif -- common/autotest_common.sh@970 -- # wait 3319648 00:36:11.074 20:49:25 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:11.074 20:49:25 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:13.618 Waiting for block devices as requested 00:36:13.618 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:13.618 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:13.618 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:13.618 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:13.879 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:13.879 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:13.879 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:13.879 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:14.140 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:14.140 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:14.400 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:14.400 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:14.400 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:14.400 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:14.660 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:14.660 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:14.660 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:14.921 20:49:30 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:14.921 20:49:30 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:14.921 20:49:30 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:14.921 20:49:30 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:14.921 20:49:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:14.921 20:49:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:14.921 20:49:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:17.468 20:49:32 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
00:36:17.468 00:36:17.468 real 1m19.149s 00:36:17.468 user 8m4.121s 00:36:17.468 sys 0m20.624s 00:36:17.468 20:49:32 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:17.468 20:49:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:17.468 ************************************ 00:36:17.468 END TEST nvmf_dif 00:36:17.468 ************************************ 00:36:17.468 20:49:32 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:17.468 20:49:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:17.468 20:49:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:17.468 20:49:32 -- common/autotest_common.sh@10 -- # set +x 00:36:17.468 ************************************ 00:36:17.468 START TEST nvmf_abort_qd_sizes 00:36:17.468 ************************************ 00:36:17.468 20:49:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:17.468 * Looking for test storage... 00:36:17.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:17.468 20:49:33 
nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:17.468 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:17.469 20:49:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:17.469 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:17.469 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:17.469 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:17.469 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:17.469 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:17.469 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:17.469 20:49:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:17.469 20:49:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:36:17.469 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:17.469 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:17.469 20:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:17.469 20:49:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:25.612 20:49:40 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:25.612 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:25.612 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:25.613 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:25.613 Found net devices under 0000:31:00.0: cvl_0_0 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:25.613 Found net devices under 0000:31:00.1: cvl_0_1 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:25.613 20:49:40 nvmf_abort_qd_sizes 
-- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:25.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:25.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:36:25.613 00:36:25.613 --- 10.0.0.2 ping statistics --- 00:36:25.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:25.613 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:36:25.613 20:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:25.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:25.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:36:25.613 00:36:25.613 --- 10.0.0.1 ping statistics --- 00:36:25.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:25.613 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:36:25.613 20:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:25.613 20:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:25.613 20:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:25.613 20:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:28.919 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:28.919 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:28.919 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:28.919 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:28.919 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:28.919 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:28.919 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:28.919 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:28.919 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:28.919 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:28.919 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:28.919 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:28.919 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:28.919 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:29.179 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:29.179 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:29.179 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3340248 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3340248 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 3340248 ']' 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:29.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:29.440 20:49:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:29.440 [2024-05-13 20:49:45.323894] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:36:29.440 [2024-05-13 20:49:45.323939] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:29.440 EAL: No free 2048 kB hugepages reported on node 1 00:36:29.700 [2024-05-13 20:49:45.394527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:29.700 [2024-05-13 20:49:45.460911] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:29.700 [2024-05-13 20:49:45.460948] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:29.700 [2024-05-13 20:49:45.460956] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:29.700 [2024-05-13 20:49:45.460962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:29.700 [2024-05-13 20:49:45.460967] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:29.700 [2024-05-13 20:49:45.461109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:29.700 [2024-05-13 20:49:45.461240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:29.700 [2024-05-13 20:49:45.461386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:29.700 [2024-05-13 20:49:45.461389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:30.275 20:49:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.275 ************************************ 00:36:30.275 START TEST spdk_target_abort 00:36:30.275 ************************************ 00:36:30.275 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:30.275 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:30.275 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:36:30.275 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.275 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.560 spdk_targetn1 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.836 [2024-05-13 20:49:46.501320] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.836 [2024-05-13 20:49:46.541367] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:30.836 [2024-05-13 20:49:46.541601] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:30.836 20:49:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:30.836 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.836 [2024-05-13 20:49:46.755862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:856 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:36:30.836 [2024-05-13 20:49:46.755887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:006c p:1 m:0 dnr:0 00:36:31.097 [2024-05-13 20:49:46.779753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1648 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:36:31.097 [2024-05-13 20:49:46.779770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00d0 p:1 m:0 dnr:0 00:36:31.097 [2024-05-13 20:49:46.803800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2456 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:36:31.097 [2024-05-13 20:49:46.803816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:31.097 [2024-05-13 20:49:46.850762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:4016 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:36:31.097 [2024-05-13 20:49:46.850779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00fa p:0 m:0 dnr:0 00:36:34.432 Initializing NVMe Controllers 00:36:34.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:34.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:34.432 Initialization complete. Launching workers. 
00:36:34.432 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12300, failed: 4 00:36:34.432 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3639, failed to submit 8665 00:36:34.432 success 793, unsuccess 2846, failed 0 00:36:34.432 20:49:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:34.432 20:49:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:34.432 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.432 [2024-05-13 20:49:50.065641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:528 len:8 PRP1 0x200007c56000 PRP2 0x0 00:36:34.432 [2024-05-13 20:49:50.065687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:36:34.432 [2024-05-13 20:49:50.137425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:1976 len:8 PRP1 0x200007c40000 PRP2 0x0 00:36:34.432 [2024-05-13 20:49:50.137458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:36:34.432 [2024-05-13 20:49:50.153467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:2384 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:36:34.432 [2024-05-13 20:49:50.153490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:34.432 [2024-05-13 20:49:50.161370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:2616 len:8 PRP1 0x200007c3e000 PRP2 0x0 00:36:34.432 [2024-05-13 20:49:50.161391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:34.432 [2024-05-13 20:49:50.209363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:3608 len:8 PRP1 0x200007c40000 PRP2 0x0 00:36:34.432 [2024-05-13 20:49:50.209386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:00cf p:0 m:0 dnr:0 00:36:34.432 [2024-05-13 20:49:50.225381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:3992 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:36:34.432 [2024-05-13 20:49:50.225402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:00fd p:0 m:0 dnr:0 00:36:36.975 [2024-05-13 20:49:52.404361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:53080 len:8 PRP1 0x200007c50000 PRP2 0x0 00:36:36.975 [2024-05-13 20:49:52.404399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:00f2 p:0 m:0 dnr:0 00:36:37.546 Initializing NVMe Controllers 00:36:37.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:37.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:37.547 Initialization complete. Launching workers. 
00:36:37.547 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8530, failed: 7 00:36:37.547 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1282, failed to submit 7255 00:36:37.547 success 351, unsuccess 931, failed 0 00:36:37.547 20:49:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:37.547 20:49:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:37.547 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.089 [2024-05-13 20:49:55.766703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:173 nsid:1 lba:284136 len:8 PRP1 0x200007922000 PRP2 0x0 00:36:40.089 [2024-05-13 20:49:55.766738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:173 cdw0:0 sqhd:00cf p:1 m:0 dnr:0 00:36:40.661 Initializing NVMe Controllers 00:36:40.661 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:40.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:40.661 Initialization complete. Launching workers. 00:36:40.661 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43355, failed: 1 00:36:40.661 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2648, failed to submit 40708 00:36:40.661 success 565, unsuccess 2083, failed 0 00:36:40.661 20:49:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:40.661 20:49:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.661 20:49:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:40.661 20:49:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.661 20:49:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:40.661 20:49:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.661 20:49:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3340248 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 3340248 ']' 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 3340248 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3340248 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' 
reactor_0 = sudo ']' 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3340248' 00:36:42.574 killing process with pid 3340248 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 3340248 00:36:42.574 [2024-05-13 20:49:58.243470] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 3340248 00:36:42.574 00:36:42.574 real 0m12.189s 00:36:42.574 user 0m49.462s 00:36:42.574 sys 0m1.913s 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:42.574 ************************************ 00:36:42.574 END TEST spdk_target_abort 00:36:42.574 ************************************ 00:36:42.574 20:49:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:42.574 20:49:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:42.574 20:49:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:42.574 20:49:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:42.574 ************************************ 00:36:42.574 START TEST kernel_target_abort 00:36:42.574 ************************************ 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@728 -- # local ip 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # ip_candidates=() 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # local -A ip_candidates 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:42.574 20:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:46.781 Waiting for block devices as requested 00:36:46.781 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:46.781 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:46.781 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:46.781 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:46.781 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:46.781 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:46.781 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:46.781 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:46.781 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:47.042 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:47.042 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:47.042 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:47.303 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:47.303 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:47.303 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:47.303 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:47.564 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:47.826 No valid GPT data, bailing 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 
1 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:36:47.826 00:36:47.826 Discovery Log Number of Records 2, Generation counter 2 00:36:47.826 =====Discovery Log Entry 0====== 00:36:47.826 trtype: tcp 00:36:47.826 adrfam: ipv4 00:36:47.826 subtype: current discovery subsystem 00:36:47.826 treq: not specified, sq flow control disable supported 00:36:47.826 portid: 1 00:36:47.826 trsvcid: 4420 00:36:47.826 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:47.826 traddr: 10.0.0.1 00:36:47.826 eflags: none 00:36:47.826 sectype: none 00:36:47.826 =====Discovery Log Entry 1====== 00:36:47.826 trtype: tcp 00:36:47.826 adrfam: ipv4 00:36:47.826 subtype: nvme subsystem 00:36:47.826 treq: not specified, sq flow control disable supported 00:36:47.826 portid: 1 00:36:47.826 trsvcid: 4420 00:36:47.826 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:47.826 traddr: 10.0.0.1 00:36:47.826 eflags: none 00:36:47.826 sectype: none 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- 
# local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.826 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:47.827 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:47.827 20:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:47.827 EAL: No free 2048 kB hugepages reported on node 1 00:36:51.128 Initializing NVMe Controllers 00:36:51.128 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:51.128 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:51.128 Initialization complete. Launching workers. 
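
For reference, a minimal sketch of the configfs sequence that the kernel_target_abort setup above performs (nvmf/common.sh@658-@677). The attribute file names (device_path, enable, attr_allow_any_host, addr_*) are the standard kernel nvmet configfs entries and are assumed here, since the xtrace only shows the echoed values and not the redirect targets:

  # expose /dev/nvme0n1 as nqn.2016-06.io.spdk:testnqn over NVMe/TCP on 10.0.0.1:4420
  nqn=nqn.2016-06.io.spdk:testnqn
  mkdir /sys/kernel/config/nvmet/subsystems/$nqn
  mkdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
  mkdir /sys/kernel/config/nvmet/ports/1
  echo 1            > /sys/kernel/config/nvmet/subsystems/$nqn/attr_allow_any_host
  echo /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/device_path
  echo 1            > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable
  echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/$nqn /sys/kernel/config/nvmet/ports/1/subsystems/
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list the discovery subsystem plus $nqn

The discovery output above (two log entries, one for nqn.2014-08.org.nvmexpress.discovery and one for the test subsystem) is what confirms the kernel target came up before the abort runs start.
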
00:36:51.128 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56826, failed: 0 00:36:51.128 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56826, failed to submit 0 00:36:51.128 success 0, unsuccess 56826, failed 0 00:36:51.128 20:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:51.128 20:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:51.128 EAL: No free 2048 kB hugepages reported on node 1 00:36:54.426 Initializing NVMe Controllers 00:36:54.426 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:54.426 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:54.426 Initialization complete. Launching workers. 00:36:54.426 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98912, failed: 0 00:36:54.426 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24978, failed to submit 73934 00:36:54.426 success 0, unsuccess 24978, failed 0 00:36:54.426 20:50:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:54.426 20:50:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:54.426 EAL: No free 2048 kB hugepages reported on node 1 00:36:57.724 Initializing NVMe Controllers 00:36:57.724 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:57.724 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:57.724 Initialization complete. Launching workers. 
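
The three abort runs differ only in the -q (queue depth) value, which is what abort_qd_sizes.sh@32-@34 iterates over; condensed, the loop looks roughly like this, with the transport string copied from the xtrace (the relative path to the abort example under the SPDK checkout is assumed):

  # one abort run per queue depth; each prints the NS/CTRLR summary seen in the log
  for qd in 4 24 64; do
      ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done
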
00:36:57.724 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95289, failed: 0 00:36:57.724 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23822, failed to submit 71467 00:36:57.724 success 0, unsuccess 23822, failed 0 00:36:57.724 20:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:57.724 20:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:57.724 20:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:57.724 20:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:57.724 20:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:57.724 20:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:57.724 20:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:57.724 20:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:57.724 20:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:57.724 20:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:01.026 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:01.026 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:02.409 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:02.980 00:37:02.980 real 0m20.205s 00:37:02.980 user 0m8.980s 00:37:02.980 sys 0m6.303s 00:37:02.980 20:50:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:02.980 20:50:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:02.980 ************************************ 00:37:02.980 END TEST kernel_target_abort 00:37:02.980 ************************************ 00:37:02.980 20:50:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:02.980 20:50:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:02.981 rmmod nvme_tcp 00:37:02.981 rmmod nvme_fabrics 00:37:02.981 rmmod nvme_keyring 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3340248 ']' 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3340248 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 3340248 ']' 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 3340248 00:37:02.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3340248) - No such process 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 3340248 is not found' 00:37:02.981 Process with pid 3340248 is not found 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:02.981 20:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:07.189 Waiting for block devices as requested 00:37:07.189 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:07.189 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:07.189 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:07.189 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:07.189 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:07.189 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:07.189 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:07.189 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:07.189 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:07.450 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:07.450 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:07.712 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:07.712 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:07.712 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:07.712 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:07.972 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:07.972 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:08.231 20:50:24 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:08.231 20:50:24 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:08.231 20:50:24 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:08.231 20:50:24 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:08.231 20:50:24 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.231 20:50:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:08.231 20:50:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.145 20:50:26 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:10.145 00:37:10.145 real 0m53.139s 00:37:10.145 user 1m4.241s 00:37:10.145 sys 0m19.831s 00:37:10.145 20:50:26 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:37:10.145 20:50:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:10.145 ************************************ 00:37:10.145 END TEST nvmf_abort_qd_sizes 00:37:10.146 ************************************ 00:37:10.407 20:50:26 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:10.407 20:50:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:10.407 20:50:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:10.407 20:50:26 -- common/autotest_common.sh@10 -- # set +x 00:37:10.407 ************************************ 00:37:10.407 START TEST keyring_file 00:37:10.408 ************************************ 00:37:10.408 20:50:26 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:10.408 * Looking for test storage... 00:37:10.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:10.408 20:50:26 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:10.408 20:50:26 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:10.408 20:50:26 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:10.408 20:50:26 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:10.408 20:50:26 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.408 20:50:26 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.408 20:50:26 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.408 20:50:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:10.408 20:50:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:10.408 20:50:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:10.408 20:50:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:10.408 20:50:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:10.408 20:50:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:10.408 20:50:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:10.408 20:50:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:10.408 20:50:26 keyring_file -- 
keyring/common.sh@17 -- # digest=0 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.RwF8esV6is 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.RwF8esV6is 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.RwF8esV6is 00:37:10.408 20:50:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.RwF8esV6is 00:37:10.408 20:50:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0wdCNTwDvD 00:37:10.408 20:50:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:10.408 20:50:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:10.670 20:50:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0wdCNTwDvD 00:37:10.670 20:50:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0wdCNTwDvD 00:37:10.670 20:50:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.0wdCNTwDvD 00:37:10.670 20:50:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=3351130 00:37:10.670 20:50:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3351130 00:37:10.670 20:50:26 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:10.670 20:50:26 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3351130 ']' 00:37:10.670 20:50:26 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:10.670 20:50:26 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:10.670 20:50:26 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:10.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
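
The two /tmp/tmp.* files created above hold the test PSKs in the NVMe TLS interchange format produced by format_interchange_psk; a rough sketch of what prep_key does with each of them (the actual encoding is done by the python snippet at nvmf/common.sh@705 and is not reproduced here, so the redirect shown is an assumption):

  # prep_key key0 00112233445566778899aabbccddeeff 0   (keyring/common.sh)
  path=$(mktemp)                                                   # e.g. /tmp/tmp.RwF8esV6is
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"   # NVMeTLSkey-1 wrapped key
  chmod 0600 "$path"                                               # keyring_file needs restrictive perms

Later in the run the same file is deliberately chmod'ed to 0660 to confirm that keyring_file_add_key rejects key files with group/other permissions.
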
00:37:10.670 20:50:26 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:10.670 20:50:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:10.670 [2024-05-13 20:50:26.431016] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:37:10.670 [2024-05-13 20:50:26.431067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351130 ] 00:37:10.670 EAL: No free 2048 kB hugepages reported on node 1 00:37:10.670 [2024-05-13 20:50:26.496749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:10.670 [2024-05-13 20:50:26.561365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.613 20:50:27 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:11.613 20:50:27 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:11.613 20:50:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:11.613 20:50:27 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.613 20:50:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:11.614 [2024-05-13 20:50:27.211013] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:11.614 null0 00:37:11.614 [2024-05-13 20:50:27.243033] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:11.614 [2024-05-13 20:50:27.243081] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:11.614 [2024-05-13 20:50:27.243369] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:11.614 [2024-05-13 20:50:27.251063] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.614 20:50:27 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:11.614 [2024-05-13 20:50:27.263096] nvmf_rpc.c: 768:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:11.614 request: 00:37:11.614 { 00:37:11.614 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:11.614 "secure_channel": false, 00:37:11.614 "listen_address": { 00:37:11.614 "trtype": "tcp", 00:37:11.614 
"traddr": "127.0.0.1", 00:37:11.614 "trsvcid": "4420" 00:37:11.614 }, 00:37:11.614 "method": "nvmf_subsystem_add_listener", 00:37:11.614 "req_id": 1 00:37:11.614 } 00:37:11.614 Got JSON-RPC error response 00:37:11.614 response: 00:37:11.614 { 00:37:11.614 "code": -32602, 00:37:11.614 "message": "Invalid parameters" 00:37:11.614 } 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:11.614 20:50:27 keyring_file -- keyring/file.sh@46 -- # bperfpid=3351148 00:37:11.614 20:50:27 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3351148 /var/tmp/bperf.sock 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3351148 ']' 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:11.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:11.614 20:50:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:11.614 20:50:27 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:11.614 [2024-05-13 20:50:27.323933] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:37:11.614 [2024-05-13 20:50:27.323992] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351148 ] 00:37:11.614 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.614 [2024-05-13 20:50:27.404211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.614 [2024-05-13 20:50:27.468701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.187 20:50:28 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:12.187 20:50:28 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:12.187 20:50:28 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.RwF8esV6is 00:37:12.187 20:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.RwF8esV6is 00:37:12.447 20:50:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0wdCNTwDvD 00:37:12.447 20:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0wdCNTwDvD 00:37:12.758 20:50:28 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:12.758 20:50:28 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:12.758 20:50:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.758 20:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.758 20:50:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:12.758 20:50:28 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.RwF8esV6is == \/\t\m\p\/\t\m\p\.\R\w\F\8\e\s\V\6\i\s ]] 00:37:12.758 20:50:28 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:37:12.758 20:50:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:12.758 20:50:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.758 20:50:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:12.758 20:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.043 20:50:28 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0wdCNTwDvD == \/\t\m\p\/\t\m\p\.\0\w\d\C\N\T\w\D\v\D ]] 00:37:13.043 20:50:28 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:13.043 20:50:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:13.043 20:50:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.043 20:50:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.043 20:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.043 20:50:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:13.043 20:50:28 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:13.043 20:50:28 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:13.043 20:50:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:13.043 20:50:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.043 20:50:28 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.043 20:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.043 20:50:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:13.304 20:50:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:13.304 20:50:29 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:13.304 20:50:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:13.304 [2024-05-13 20:50:29.213328] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:13.565 nvme0n1 00:37:13.565 20:50:29 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:13.565 20:50:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:13.565 20:50:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.565 20:50:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.565 20:50:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.565 20:50:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:13.565 20:50:29 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:13.565 20:50:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:13.565 20:50:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:13.565 20:50:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.565 20:50:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.565 20:50:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.565 20:50:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:13.826 20:50:29 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:13.826 20:50:29 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:13.826 Running I/O for 1 seconds... 
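
The I/O itself is driven through the bdevperf RPC helper, and the get_refcnt checks before and after it simply count references on each key via keyring_get_keys; roughly (paths relative to the SPDK checkout, jq expression condensed from the two-step filter in the trace):

  # kick off the 1 s randrw job defined by the bdevperf command line
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # refcount of key0 while the TLS connection is up (expected to be 2: keyring + active controller)
  ./scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r '.[] | select(.name == "key0") | .refcnt'
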
00:37:15.211 00:37:15.211 Latency(us) 00:37:15.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.211 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:15.211 nvme0n1 : 1.01 10320.84 40.32 0.00 0.00 12323.32 3549.87 16493.23 00:37:15.211 =================================================================================================================== 00:37:15.211 Total : 10320.84 40.32 0.00 0.00 12323.32 3549.87 16493.23 00:37:15.211 0 00:37:15.211 20:50:30 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:15.211 20:50:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:15.211 20:50:30 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:15.211 20:50:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:15.211 20:50:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:15.211 20:50:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:15.211 20:50:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.211 20:50:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:15.211 20:50:31 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:15.211 20:50:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:15.211 20:50:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:15.211 20:50:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:15.211 20:50:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:15.211 20:50:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:15.211 20:50:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.472 20:50:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:15.472 20:50:31 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:15.472 20:50:31 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:15.472 20:50:31 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:15.472 20:50:31 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:15.472 20:50:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.472 20:50:31 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:15.472 20:50:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.472 20:50:31 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:15.472 20:50:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk key1 00:37:15.472 [2024-05-13 20:50:31.409220] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:15.472 [2024-05-13 20:50:31.410047] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1362630 (107): Transport endpoint is not connected 00:37:15.472 [2024-05-13 20:50:31.411043] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1362630 (9): Bad file descriptor 00:37:15.472 [2024-05-13 20:50:31.412044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:15.472 [2024-05-13 20:50:31.412051] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:15.472 [2024-05-13 20:50:31.412056] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:15.733 request: 00:37:15.733 { 00:37:15.733 "name": "nvme0", 00:37:15.733 "trtype": "tcp", 00:37:15.733 "traddr": "127.0.0.1", 00:37:15.733 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.733 "adrfam": "ipv4", 00:37:15.733 "trsvcid": "4420", 00:37:15.733 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.733 "psk": "key1", 00:37:15.733 "method": "bdev_nvme_attach_controller", 00:37:15.733 "req_id": 1 00:37:15.733 } 00:37:15.733 Got JSON-RPC error response 00:37:15.733 response: 00:37:15.733 { 00:37:15.733 "code": -32602, 00:37:15.733 "message": "Invalid parameters" 00:37:15.733 } 00:37:15.733 20:50:31 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:15.733 20:50:31 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:15.733 20:50:31 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:15.733 20:50:31 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:15.733 20:50:31 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:15.733 20:50:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:15.733 20:50:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:15.733 20:50:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:15.733 20:50:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:15.733 20:50:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.733 20:50:31 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:15.733 20:50:31 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:15.733 20:50:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:15.733 20:50:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:15.733 20:50:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:15.733 20:50:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:15.733 20:50:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.993 20:50:31 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:15.993 20:50:31 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:15.993 20:50:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key 
key0 00:37:15.993 20:50:31 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:15.993 20:50:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:16.254 20:50:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:16.254 20:50:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.254 20:50:32 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:16.515 20:50:32 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:16.515 20:50:32 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.RwF8esV6is 00:37:16.515 20:50:32 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.RwF8esV6is 00:37:16.515 20:50:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:16.515 20:50:32 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.RwF8esV6is 00:37:16.515 20:50:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:16.515 20:50:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:16.515 20:50:32 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:16.515 20:50:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:16.515 20:50:32 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.RwF8esV6is 00:37:16.515 20:50:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.RwF8esV6is 00:37:16.515 [2024-05-13 20:50:32.389875] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.RwF8esV6is': 0100660 00:37:16.515 [2024-05-13 20:50:32.389892] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:16.515 request: 00:37:16.515 { 00:37:16.515 "name": "key0", 00:37:16.515 "path": "/tmp/tmp.RwF8esV6is", 00:37:16.515 "method": "keyring_file_add_key", 00:37:16.515 "req_id": 1 00:37:16.515 } 00:37:16.515 Got JSON-RPC error response 00:37:16.515 response: 00:37:16.515 { 00:37:16.515 "code": -1, 00:37:16.515 "message": "Operation not permitted" 00:37:16.515 } 00:37:16.515 20:50:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:16.515 20:50:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:16.515 20:50:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:16.515 20:50:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:16.515 20:50:32 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.RwF8esV6is 00:37:16.515 20:50:32 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.RwF8esV6is 00:37:16.515 20:50:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.RwF8esV6is 00:37:16.775 20:50:32 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.RwF8esV6is 00:37:16.775 20:50:32 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:16.775 20:50:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:16.775 20:50:32 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:37:16.775 20:50:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:16.775 20:50:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.775 20:50:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:17.037 20:50:32 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:17.037 20:50:32 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:17.037 20:50:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:17.037 20:50:32 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:17.037 20:50:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:17.037 20:50:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:17.037 20:50:32 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:17.037 20:50:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:17.037 20:50:32 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:17.037 20:50:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:17.037 [2024-05-13 20:50:32.867071] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.RwF8esV6is': No such file or directory 00:37:17.037 [2024-05-13 20:50:32.867084] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:17.037 [2024-05-13 20:50:32.867100] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:17.037 [2024-05-13 20:50:32.867105] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:17.037 [2024-05-13 20:50:32.867110] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:17.037 request: 00:37:17.037 { 00:37:17.037 "name": "nvme0", 00:37:17.037 "trtype": "tcp", 00:37:17.037 "traddr": "127.0.0.1", 00:37:17.037 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:17.037 "adrfam": "ipv4", 00:37:17.037 "trsvcid": "4420", 00:37:17.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:17.037 "psk": "key0", 00:37:17.037 "method": "bdev_nvme_attach_controller", 00:37:17.037 "req_id": 1 00:37:17.037 } 00:37:17.037 Got JSON-RPC error response 00:37:17.037 response: 00:37:17.037 { 00:37:17.037 "code": -19, 00:37:17.037 "message": "No such device" 00:37:17.037 } 00:37:17.037 20:50:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:17.037 20:50:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:17.037 20:50:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:17.037 20:50:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:17.037 20:50:32 
keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:17.037 20:50:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:17.298 20:50:33 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:17.298 20:50:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:17.298 20:50:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:17.298 20:50:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:17.298 20:50:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:17.298 20:50:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:17.298 20:50:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Gqb82FQNyj 00:37:17.298 20:50:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:17.298 20:50:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:17.298 20:50:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:17.298 20:50:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:17.298 20:50:33 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:17.298 20:50:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:17.298 20:50:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:17.298 20:50:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Gqb82FQNyj 00:37:17.298 20:50:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Gqb82FQNyj 00:37:17.298 20:50:33 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Gqb82FQNyj 00:37:17.298 20:50:33 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Gqb82FQNyj 00:37:17.298 20:50:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Gqb82FQNyj 00:37:17.559 20:50:33 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:17.559 20:50:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:17.559 nvme0n1 00:37:17.559 20:50:33 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:17.559 20:50:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:17.559 20:50:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:17.559 20:50:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.559 20:50:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:17.559 20:50:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.821 20:50:33 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:17.821 20:50:33 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:17.821 20:50:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_file_remove_key key0 00:37:18.081 20:50:33 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:18.081 20:50:33 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:18.081 20:50:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.081 20:50:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.081 20:50:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:18.081 20:50:33 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:18.081 20:50:33 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:18.081 20:50:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:18.081 20:50:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:18.081 20:50:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.081 20:50:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.081 20:50:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:18.341 20:50:34 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:18.341 20:50:34 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:18.342 20:50:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:18.603 20:50:34 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:18.603 20:50:34 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:18.603 20:50:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.603 20:50:34 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:18.603 20:50:34 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Gqb82FQNyj 00:37:18.603 20:50:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Gqb82FQNyj 00:37:18.863 20:50:34 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0wdCNTwDvD 00:37:18.863 20:50:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0wdCNTwDvD 00:37:18.863 20:50:34 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.863 20:50:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:19.124 nvme0n1 00:37:19.124 20:50:35 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:19.124 20:50:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:19.385 20:50:35 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:19.385 "subsystems": [ 00:37:19.385 { 00:37:19.385 
"subsystem": "keyring", 00:37:19.385 "config": [ 00:37:19.385 { 00:37:19.385 "method": "keyring_file_add_key", 00:37:19.385 "params": { 00:37:19.385 "name": "key0", 00:37:19.385 "path": "/tmp/tmp.Gqb82FQNyj" 00:37:19.385 } 00:37:19.385 }, 00:37:19.385 { 00:37:19.385 "method": "keyring_file_add_key", 00:37:19.385 "params": { 00:37:19.385 "name": "key1", 00:37:19.385 "path": "/tmp/tmp.0wdCNTwDvD" 00:37:19.385 } 00:37:19.385 } 00:37:19.385 ] 00:37:19.385 }, 00:37:19.385 { 00:37:19.385 "subsystem": "iobuf", 00:37:19.385 "config": [ 00:37:19.385 { 00:37:19.385 "method": "iobuf_set_options", 00:37:19.385 "params": { 00:37:19.385 "small_pool_count": 8192, 00:37:19.385 "large_pool_count": 1024, 00:37:19.385 "small_bufsize": 8192, 00:37:19.385 "large_bufsize": 135168 00:37:19.385 } 00:37:19.385 } 00:37:19.385 ] 00:37:19.385 }, 00:37:19.385 { 00:37:19.385 "subsystem": "sock", 00:37:19.385 "config": [ 00:37:19.385 { 00:37:19.385 "method": "sock_impl_set_options", 00:37:19.385 "params": { 00:37:19.385 "impl_name": "posix", 00:37:19.385 "recv_buf_size": 2097152, 00:37:19.385 "send_buf_size": 2097152, 00:37:19.385 "enable_recv_pipe": true, 00:37:19.385 "enable_quickack": false, 00:37:19.385 "enable_placement_id": 0, 00:37:19.385 "enable_zerocopy_send_server": true, 00:37:19.385 "enable_zerocopy_send_client": false, 00:37:19.385 "zerocopy_threshold": 0, 00:37:19.385 "tls_version": 0, 00:37:19.385 "enable_ktls": false 00:37:19.385 } 00:37:19.385 }, 00:37:19.385 { 00:37:19.385 "method": "sock_impl_set_options", 00:37:19.385 "params": { 00:37:19.385 "impl_name": "ssl", 00:37:19.385 "recv_buf_size": 4096, 00:37:19.385 "send_buf_size": 4096, 00:37:19.385 "enable_recv_pipe": true, 00:37:19.385 "enable_quickack": false, 00:37:19.385 "enable_placement_id": 0, 00:37:19.385 "enable_zerocopy_send_server": true, 00:37:19.385 "enable_zerocopy_send_client": false, 00:37:19.385 "zerocopy_threshold": 0, 00:37:19.385 "tls_version": 0, 00:37:19.385 "enable_ktls": false 00:37:19.385 } 00:37:19.385 } 00:37:19.385 ] 00:37:19.385 }, 00:37:19.385 { 00:37:19.385 "subsystem": "vmd", 00:37:19.385 "config": [] 00:37:19.385 }, 00:37:19.385 { 00:37:19.385 "subsystem": "accel", 00:37:19.385 "config": [ 00:37:19.385 { 00:37:19.385 "method": "accel_set_options", 00:37:19.385 "params": { 00:37:19.385 "small_cache_size": 128, 00:37:19.385 "large_cache_size": 16, 00:37:19.385 "task_count": 2048, 00:37:19.385 "sequence_count": 2048, 00:37:19.385 "buf_count": 2048 00:37:19.385 } 00:37:19.385 } 00:37:19.385 ] 00:37:19.385 }, 00:37:19.385 { 00:37:19.385 "subsystem": "bdev", 00:37:19.385 "config": [ 00:37:19.385 { 00:37:19.385 "method": "bdev_set_options", 00:37:19.385 "params": { 00:37:19.385 "bdev_io_pool_size": 65535, 00:37:19.385 "bdev_io_cache_size": 256, 00:37:19.385 "bdev_auto_examine": true, 00:37:19.385 "iobuf_small_cache_size": 128, 00:37:19.385 "iobuf_large_cache_size": 16 00:37:19.385 } 00:37:19.385 }, 00:37:19.385 { 00:37:19.385 "method": "bdev_raid_set_options", 00:37:19.385 "params": { 00:37:19.385 "process_window_size_kb": 1024 00:37:19.385 } 00:37:19.385 }, 00:37:19.385 { 00:37:19.385 "method": "bdev_iscsi_set_options", 00:37:19.385 "params": { 00:37:19.385 "timeout_sec": 30 00:37:19.385 } 00:37:19.385 }, 00:37:19.385 { 00:37:19.385 "method": "bdev_nvme_set_options", 00:37:19.385 "params": { 00:37:19.385 "action_on_timeout": "none", 00:37:19.385 "timeout_us": 0, 00:37:19.385 "timeout_admin_us": 0, 00:37:19.385 "keep_alive_timeout_ms": 10000, 00:37:19.385 "arbitration_burst": 0, 00:37:19.385 "low_priority_weight": 0, 
00:37:19.385 "medium_priority_weight": 0, 00:37:19.385 "high_priority_weight": 0, 00:37:19.385 "nvme_adminq_poll_period_us": 10000, 00:37:19.385 "nvme_ioq_poll_period_us": 0, 00:37:19.385 "io_queue_requests": 512, 00:37:19.385 "delay_cmd_submit": true, 00:37:19.385 "transport_retry_count": 4, 00:37:19.385 "bdev_retry_count": 3, 00:37:19.385 "transport_ack_timeout": 0, 00:37:19.385 "ctrlr_loss_timeout_sec": 0, 00:37:19.385 "reconnect_delay_sec": 0, 00:37:19.385 "fast_io_fail_timeout_sec": 0, 00:37:19.385 "disable_auto_failback": false, 00:37:19.385 "generate_uuids": false, 00:37:19.385 "transport_tos": 0, 00:37:19.385 "nvme_error_stat": false, 00:37:19.385 "rdma_srq_size": 0, 00:37:19.385 "io_path_stat": false, 00:37:19.385 "allow_accel_sequence": false, 00:37:19.385 "rdma_max_cq_size": 0, 00:37:19.385 "rdma_cm_event_timeout_ms": 0, 00:37:19.385 "dhchap_digests": [ 00:37:19.385 "sha256", 00:37:19.385 "sha384", 00:37:19.385 "sha512" 00:37:19.385 ], 00:37:19.385 "dhchap_dhgroups": [ 00:37:19.385 "null", 00:37:19.385 "ffdhe2048", 00:37:19.385 "ffdhe3072", 00:37:19.385 "ffdhe4096", 00:37:19.385 "ffdhe6144", 00:37:19.385 "ffdhe8192" 00:37:19.385 ] 00:37:19.385 } 00:37:19.385 }, 00:37:19.385 { 00:37:19.385 "method": "bdev_nvme_attach_controller", 00:37:19.385 "params": { 00:37:19.385 "name": "nvme0", 00:37:19.385 "trtype": "TCP", 00:37:19.385 "adrfam": "IPv4", 00:37:19.385 "traddr": "127.0.0.1", 00:37:19.385 "trsvcid": "4420", 00:37:19.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:19.385 "prchk_reftag": false, 00:37:19.385 "prchk_guard": false, 00:37:19.385 "ctrlr_loss_timeout_sec": 0, 00:37:19.385 "reconnect_delay_sec": 0, 00:37:19.385 "fast_io_fail_timeout_sec": 0, 00:37:19.385 "psk": "key0", 00:37:19.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:19.385 "hdgst": false, 00:37:19.385 "ddgst": false 00:37:19.385 } 00:37:19.385 }, 00:37:19.385 { 00:37:19.385 "method": "bdev_nvme_set_hotplug", 00:37:19.385 "params": { 00:37:19.385 "period_us": 100000, 00:37:19.385 "enable": false 00:37:19.385 } 00:37:19.385 }, 00:37:19.386 { 00:37:19.386 "method": "bdev_wait_for_examine" 00:37:19.386 } 00:37:19.386 ] 00:37:19.386 }, 00:37:19.386 { 00:37:19.386 "subsystem": "nbd", 00:37:19.386 "config": [] 00:37:19.386 } 00:37:19.386 ] 00:37:19.386 }' 00:37:19.386 20:50:35 keyring_file -- keyring/file.sh@114 -- # killprocess 3351148 00:37:19.386 20:50:35 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3351148 ']' 00:37:19.386 20:50:35 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3351148 00:37:19.386 20:50:35 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:19.386 20:50:35 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:19.386 20:50:35 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3351148 00:37:19.386 20:50:35 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:19.386 20:50:35 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:19.386 20:50:35 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3351148' 00:37:19.386 killing process with pid 3351148 00:37:19.386 20:50:35 keyring_file -- common/autotest_common.sh@965 -- # kill 3351148 00:37:19.386 Received shutdown signal, test time was about 1.000000 seconds 00:37:19.386 00:37:19.386 Latency(us) 00:37:19.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:19.386 
=================================================================================================================== 00:37:19.386 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:19.386 20:50:35 keyring_file -- common/autotest_common.sh@970 -- # wait 3351148 00:37:19.647 20:50:35 keyring_file -- keyring/file.sh@117 -- # bperfpid=3352957 00:37:19.647 20:50:35 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3352957 /var/tmp/bperf.sock 00:37:19.647 20:50:35 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3352957 ']' 00:37:19.647 20:50:35 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:19.647 20:50:35 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:19.647 20:50:35 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:19.647 20:50:35 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:19.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:19.647 20:50:35 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:19.647 20:50:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:19.647 20:50:35 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:19.647 "subsystems": [ 00:37:19.647 { 00:37:19.647 "subsystem": "keyring", 00:37:19.647 "config": [ 00:37:19.647 { 00:37:19.647 "method": "keyring_file_add_key", 00:37:19.647 "params": { 00:37:19.647 "name": "key0", 00:37:19.647 "path": "/tmp/tmp.Gqb82FQNyj" 00:37:19.647 } 00:37:19.647 }, 00:37:19.647 { 00:37:19.647 "method": "keyring_file_add_key", 00:37:19.647 "params": { 00:37:19.647 "name": "key1", 00:37:19.647 "path": "/tmp/tmp.0wdCNTwDvD" 00:37:19.647 } 00:37:19.647 } 00:37:19.647 ] 00:37:19.647 }, 00:37:19.647 { 00:37:19.647 "subsystem": "iobuf", 00:37:19.647 "config": [ 00:37:19.647 { 00:37:19.647 "method": "iobuf_set_options", 00:37:19.647 "params": { 00:37:19.647 "small_pool_count": 8192, 00:37:19.647 "large_pool_count": 1024, 00:37:19.647 "small_bufsize": 8192, 00:37:19.647 "large_bufsize": 135168 00:37:19.647 } 00:37:19.647 } 00:37:19.647 ] 00:37:19.647 }, 00:37:19.647 { 00:37:19.647 "subsystem": "sock", 00:37:19.647 "config": [ 00:37:19.647 { 00:37:19.647 "method": "sock_impl_set_options", 00:37:19.647 "params": { 00:37:19.647 "impl_name": "posix", 00:37:19.647 "recv_buf_size": 2097152, 00:37:19.647 "send_buf_size": 2097152, 00:37:19.647 "enable_recv_pipe": true, 00:37:19.647 "enable_quickack": false, 00:37:19.647 "enable_placement_id": 0, 00:37:19.647 "enable_zerocopy_send_server": true, 00:37:19.647 "enable_zerocopy_send_client": false, 00:37:19.647 "zerocopy_threshold": 0, 00:37:19.647 "tls_version": 0, 00:37:19.647 "enable_ktls": false 00:37:19.647 } 00:37:19.647 }, 00:37:19.647 { 00:37:19.647 "method": "sock_impl_set_options", 00:37:19.647 "params": { 00:37:19.647 "impl_name": "ssl", 00:37:19.647 "recv_buf_size": 4096, 00:37:19.647 "send_buf_size": 4096, 00:37:19.647 "enable_recv_pipe": true, 00:37:19.647 "enable_quickack": false, 00:37:19.647 "enable_placement_id": 0, 00:37:19.647 "enable_zerocopy_send_server": true, 00:37:19.647 "enable_zerocopy_send_client": false, 00:37:19.647 "zerocopy_threshold": 0, 00:37:19.647 "tls_version": 0, 00:37:19.647 "enable_ktls": false 00:37:19.647 } 00:37:19.647 } 00:37:19.647 ] 00:37:19.647 }, 
00:37:19.647 { 00:37:19.647 "subsystem": "vmd", 00:37:19.647 "config": [] 00:37:19.647 }, 00:37:19.647 { 00:37:19.647 "subsystem": "accel", 00:37:19.647 "config": [ 00:37:19.647 { 00:37:19.647 "method": "accel_set_options", 00:37:19.647 "params": { 00:37:19.647 "small_cache_size": 128, 00:37:19.647 "large_cache_size": 16, 00:37:19.647 "task_count": 2048, 00:37:19.647 "sequence_count": 2048, 00:37:19.647 "buf_count": 2048 00:37:19.647 } 00:37:19.647 } 00:37:19.647 ] 00:37:19.647 }, 00:37:19.647 { 00:37:19.647 "subsystem": "bdev", 00:37:19.647 "config": [ 00:37:19.647 { 00:37:19.647 "method": "bdev_set_options", 00:37:19.647 "params": { 00:37:19.647 "bdev_io_pool_size": 65535, 00:37:19.647 "bdev_io_cache_size": 256, 00:37:19.647 "bdev_auto_examine": true, 00:37:19.647 "iobuf_small_cache_size": 128, 00:37:19.647 "iobuf_large_cache_size": 16 00:37:19.647 } 00:37:19.647 }, 00:37:19.647 { 00:37:19.647 "method": "bdev_raid_set_options", 00:37:19.647 "params": { 00:37:19.647 "process_window_size_kb": 1024 00:37:19.647 } 00:37:19.647 }, 00:37:19.647 { 00:37:19.647 "method": "bdev_iscsi_set_options", 00:37:19.647 "params": { 00:37:19.647 "timeout_sec": 30 00:37:19.647 } 00:37:19.647 }, 00:37:19.647 { 00:37:19.647 "method": "bdev_nvme_set_options", 00:37:19.647 "params": { 00:37:19.647 "action_on_timeout": "none", 00:37:19.647 "timeout_us": 0, 00:37:19.647 "timeout_admin_us": 0, 00:37:19.647 "keep_alive_timeout_ms": 10000, 00:37:19.647 "arbitration_burst": 0, 00:37:19.647 "low_priority_weight": 0, 00:37:19.647 "medium_priority_weight": 0, 00:37:19.647 "high_priority_weight": 0, 00:37:19.647 "nvme_adminq_poll_period_us": 10000, 00:37:19.647 "nvme_ioq_poll_period_us": 0, 00:37:19.647 "io_queue_requests": 512, 00:37:19.647 "delay_cmd_submit": true, 00:37:19.647 "transport_retry_count": 4, 00:37:19.647 "bdev_retry_count": 3, 00:37:19.647 "transport_ack_timeout": 0, 00:37:19.647 "ctrlr_loss_timeout_sec": 0, 00:37:19.647 "reconnect_delay_sec": 0, 00:37:19.647 "fast_io_fail_timeout_sec": 0, 00:37:19.647 "disable_auto_failback": false, 00:37:19.648 "generate_uuids": false, 00:37:19.648 "transport_tos": 0, 00:37:19.648 "nvme_error_stat": false, 00:37:19.648 "rdma_srq_size": 0, 00:37:19.648 "io_path_stat": false, 00:37:19.648 "allow_accel_sequence": false, 00:37:19.648 "rdma_max_cq_size": 0, 00:37:19.648 "rdma_cm_event_timeout_ms": 0, 00:37:19.648 "dhchap_digests": [ 00:37:19.648 "sha256", 00:37:19.648 "sha384", 00:37:19.648 "sha512" 00:37:19.648 ], 00:37:19.648 "dhchap_dhgroups": [ 00:37:19.648 "null", 00:37:19.648 "ffdhe2048", 00:37:19.648 "ffdhe3072", 00:37:19.648 "ffdhe4096", 00:37:19.648 "ffdhe6144", 00:37:19.648 "ffdhe8192" 00:37:19.648 ] 00:37:19.648 } 00:37:19.648 }, 00:37:19.648 { 00:37:19.648 "method": "bdev_nvme_attach_controller", 00:37:19.648 "params": { 00:37:19.648 "name": "nvme0", 00:37:19.648 "trtype": "TCP", 00:37:19.648 "adrfam": "IPv4", 00:37:19.648 "traddr": "127.0.0.1", 00:37:19.648 "trsvcid": "4420", 00:37:19.648 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:19.648 "prchk_reftag": false, 00:37:19.648 "prchk_guard": false, 00:37:19.648 "ctrlr_loss_timeout_sec": 0, 00:37:19.648 "reconnect_delay_sec": 0, 00:37:19.648 "fast_io_fail_timeout_sec": 0, 00:37:19.648 "psk": "key0", 00:37:19.648 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:19.648 "hdgst": false, 00:37:19.648 "ddgst": false 00:37:19.648 } 00:37:19.648 }, 00:37:19.648 { 00:37:19.648 "method": "bdev_nvme_set_hotplug", 00:37:19.648 "params": { 00:37:19.648 "period_us": 100000, 00:37:19.648 "enable": false 00:37:19.648 } 00:37:19.648 
}, 00:37:19.648 { 00:37:19.648 "method": "bdev_wait_for_examine" 00:37:19.648 } 00:37:19.648 ] 00:37:19.648 }, 00:37:19.648 { 00:37:19.648 "subsystem": "nbd", 00:37:19.648 "config": [] 00:37:19.648 } 00:37:19.648 ] 00:37:19.648 }' 00:37:19.648 [2024-05-13 20:50:35.454280] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:37:19.648 [2024-05-13 20:50:35.454343] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352957 ] 00:37:19.648 EAL: No free 2048 kB hugepages reported on node 1 00:37:19.648 [2024-05-13 20:50:35.535436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:19.648 [2024-05-13 20:50:35.588862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:19.909 [2024-05-13 20:50:35.722332] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:20.480 20:50:36 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:20.480 20:50:36 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:20.480 20:50:36 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:20.480 20:50:36 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:20.480 20:50:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.480 20:50:36 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:20.480 20:50:36 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:20.480 20:50:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:20.480 20:50:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:20.480 20:50:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.480 20:50:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.480 20:50:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:20.741 20:50:36 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:20.741 20:50:36 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:20.741 20:50:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:20.741 20:50:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:20.741 20:50:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.741 20:50:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:20.741 20:50:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.003 20:50:36 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:21.003 20:50:36 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:21.003 20:50:36 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:21.003 20:50:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:21.003 20:50:36 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:21.003 20:50:36 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:21.003 20:50:36 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Gqb82FQNyj 
/tmp/tmp.0wdCNTwDvD 00:37:21.003 20:50:36 keyring_file -- keyring/file.sh@20 -- # killprocess 3352957 00:37:21.003 20:50:36 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3352957 ']' 00:37:21.003 20:50:36 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3352957 00:37:21.003 20:50:36 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:21.003 20:50:36 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:21.003 20:50:36 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3352957 00:37:21.003 20:50:36 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:21.003 20:50:36 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:21.003 20:50:36 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3352957' 00:37:21.003 killing process with pid 3352957 00:37:21.003 20:50:36 keyring_file -- common/autotest_common.sh@965 -- # kill 3352957 00:37:21.003 Received shutdown signal, test time was about 1.000000 seconds 00:37:21.003 00:37:21.003 Latency(us) 00:37:21.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:21.003 =================================================================================================================== 00:37:21.003 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:21.003 20:50:36 keyring_file -- common/autotest_common.sh@970 -- # wait 3352957 00:37:21.263 20:50:37 keyring_file -- keyring/file.sh@21 -- # killprocess 3351130 00:37:21.263 20:50:37 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3351130 ']' 00:37:21.263 20:50:37 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3351130 00:37:21.263 20:50:37 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:21.263 20:50:37 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:21.263 20:50:37 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3351130 00:37:21.263 20:50:37 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:21.263 20:50:37 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:21.263 20:50:37 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3351130' 00:37:21.263 killing process with pid 3351130 00:37:21.263 20:50:37 keyring_file -- common/autotest_common.sh@965 -- # kill 3351130 00:37:21.263 [2024-05-13 20:50:37.089718] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:37:21.263 [2024-05-13 20:50:37.089757] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:21.263 20:50:37 keyring_file -- common/autotest_common.sh@970 -- # wait 3351130 00:37:21.524 00:37:21.524 real 0m11.144s 00:37:21.524 user 0m26.387s 00:37:21.524 sys 0m2.566s 00:37:21.524 20:50:37 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:21.524 20:50:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:21.524 ************************************ 00:37:21.524 END TEST keyring_file 00:37:21.524 ************************************ 00:37:21.524 20:50:37 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:37:21.524 20:50:37 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:37:21.524 20:50:37 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 
']' 00:37:21.524 20:50:37 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:37:21.524 20:50:37 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:37:21.524 20:50:37 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:37:21.524 20:50:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:37:21.524 20:50:37 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:37:21.524 20:50:37 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:37:21.524 20:50:37 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:37:21.524 20:50:37 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:37:21.524 20:50:37 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:37:21.524 20:50:37 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:37:21.524 20:50:37 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:37:21.524 20:50:37 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:37:21.524 20:50:37 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:37:21.524 20:50:37 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:37:21.524 20:50:37 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:37:21.524 20:50:37 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:21.524 20:50:37 -- common/autotest_common.sh@10 -- # set +x 00:37:21.524 20:50:37 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:37:21.524 20:50:37 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:21.524 20:50:37 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:21.524 20:50:37 -- common/autotest_common.sh@10 -- # set +x 00:37:29.667 INFO: APP EXITING 00:37:29.667 INFO: killing all VMs 00:37:29.667 INFO: killing vhost app 00:37:29.667 INFO: EXIT DONE 00:37:32.216 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:37:32.216 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:37:32.216 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:37:32.477 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:37:32.477 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:37:32.477 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:37:32.477 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:37:32.477 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:37:32.477 0000:65:00.0 (144d a80a): Already using the nvme driver 00:37:32.477 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:37:32.477 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:37:32.477 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:37:32.477 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:37:32.477 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:37:32.738 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:37:32.738 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:37:32.738 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:37:37.015 Cleaning 00:37:37.015 Removing: /var/run/dpdk/spdk0/config 00:37:37.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:37.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:37.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:37.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:37.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:37.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:37.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:37.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:37.015 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:37.015 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:37.015 
Removing: /var/run/dpdk/spdk1/config 00:37:37.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:37.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:37.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:37.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:37.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:37.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:37.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:37.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:37.015 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:37.015 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:37.015 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:37.015 Removing: /var/run/dpdk/spdk2/config 00:37:37.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:37.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:37.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:37.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:37.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:37.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:37.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:37.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:37.015 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:37.015 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:37.015 Removing: /var/run/dpdk/spdk3/config 00:37:37.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:37.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:37.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:37.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:37.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:37.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:37.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:37.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:37.015 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:37.015 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:37.015 Removing: /var/run/dpdk/spdk4/config 00:37:37.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:37.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:37.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:37.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:37.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:37.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:37.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:37.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:37.015 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:37.015 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:37.015 Removing: /dev/shm/bdev_svc_trace.1 00:37:37.015 Removing: /dev/shm/nvmf_trace.0 00:37:37.015 Removing: /dev/shm/spdk_tgt_trace.pid2813796 00:37:37.015 Removing: /var/run/dpdk/spdk0 00:37:37.015 Removing: /var/run/dpdk/spdk1 00:37:37.015 Removing: /var/run/dpdk/spdk2 00:37:37.015 Removing: /var/run/dpdk/spdk3 00:37:37.015 Removing: /var/run/dpdk/spdk4 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2812308 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2813796 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2814628 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2815671 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2816013 00:37:37.015 Removing: 
/var/run/dpdk/spdk_pid2817080 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2817271 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2817528 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2818659 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2819114 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2819503 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2819886 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2820289 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2820665 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2820837 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2821070 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2821456 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2822841 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2826097 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2826463 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2826830 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2827055 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2827529 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2827548 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2828017 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2828257 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2828617 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2828642 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2828995 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2829068 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2829652 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2829819 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2830191 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2830558 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2830588 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2830757 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2831002 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2831354 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2831703 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2832058 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2832248 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2832450 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2832797 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2833145 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2833494 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2833763 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2833953 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2834236 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2834591 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2834941 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2835286 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2835497 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2835703 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2836041 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2836388 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2836738 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2836808 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2837214 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2842052 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2944198 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2950468 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2962698 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2969595 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2974978 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2975793 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2994197 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2994605 00:37:37.015 Removing: /var/run/dpdk/spdk_pid2999998 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3007969 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3011045 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3024231 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3035936 00:37:37.015 Removing: 
/var/run/dpdk/spdk_pid3037938 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3038959 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3061031 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3066537 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3072337 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3074302 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3076416 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3076645 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3076658 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3076724 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3077194 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3079390 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3080468 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3080848 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3083549 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3084250 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3084973 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3090541 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3097654 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3103408 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3150193 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3155269 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3163043 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3164548 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3166382 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3172038 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3177518 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3187313 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3187318 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3192709 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3193038 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3193372 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3193716 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3193815 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3195085 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3197082 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3199078 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3201081 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3203075 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3205045 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3213050 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3213868 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3214734 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3215746 00:37:37.015 Removing: /var/run/dpdk/spdk_pid3222147 00:37:37.275 Removing: /var/run/dpdk/spdk_pid3225428 00:37:37.275 Removing: /var/run/dpdk/spdk_pid3232247 00:37:37.275 Removing: /var/run/dpdk/spdk_pid3239122 00:37:37.275 Removing: /var/run/dpdk/spdk_pid3249287 00:37:37.275 Removing: /var/run/dpdk/spdk_pid3258687 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3258724 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3283307 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3283998 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3284681 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3285370 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3286424 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3287108 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3287888 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3288656 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3294209 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3294541 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3302163 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3302297 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3304957 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3313137 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3313142 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3319806 00:37:37.276 Removing: 
/var/run/dpdk/spdk_pid3322164 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3324519 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3325955 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3328226 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3329744 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3340585 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3341103 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3341697 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3344688 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3345356 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3346021 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3351130 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3351148 00:37:37.276 Removing: /var/run/dpdk/spdk_pid3352957 00:37:37.276 Clean 00:37:37.276 20:50:53 -- common/autotest_common.sh@1447 -- # return 0 00:37:37.276 20:50:53 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:37:37.276 20:50:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:37.276 20:50:53 -- common/autotest_common.sh@10 -- # set +x 00:37:37.536 20:50:53 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:37:37.536 20:50:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:37.536 20:50:53 -- common/autotest_common.sh@10 -- # set +x 00:37:37.536 20:50:53 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:37.536 20:50:53 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:37.536 20:50:53 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:37.536 20:50:53 -- spdk/autotest.sh@389 -- # hash lcov 00:37:37.536 20:50:53 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:37.536 20:50:53 -- spdk/autotest.sh@391 -- # hostname 00:37:37.536 20:50:53 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:37.536 geninfo: WARNING: invalid characters removed from testname! 
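In outline, the coverage steps around this point (the capture just above and the merge/filter commands that follow) do the following. This is a condensed sketch: the long --rc flag lists are abridged, and $OUT stands in for the job's /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output directory used throughout this run.

    # capture counters from the instrumented build tree, tagged with the host name
    lcov --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
         -t spdk-cyp-12 -o $OUT/cov_test.info
    # fold the pre-test baseline and the test capture into one tracefile
    lcov -q -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info
    # strip third-party and helper sources so the report covers SPDK code only
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r $OUT/cov_total.info "$pat" -o $OUT/cov_total.info
    done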
00:38:04.121 20:51:15 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:04.121 20:51:18 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:04.121 20:51:19 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:05.508 20:51:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:07.438 20:51:22 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:08.818 20:51:24 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:10.274 20:51:25 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:10.274 20:51:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:10.274 20:51:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:10.274 20:51:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:10.274 20:51:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:10.274 20:51:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.274 20:51:25 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.274 20:51:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.274 20:51:25 -- paths/export.sh@5 -- $ export PATH 00:38:10.274 20:51:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.274 20:51:25 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:10.274 20:51:25 -- common/autobuild_common.sh@437 -- $ date +%s 00:38:10.274 20:51:25 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715626285.XXXXXX 00:38:10.274 20:51:25 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715626285.2SZZnQ 00:38:10.274 20:51:25 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:38:10.274 20:51:25 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:38:10.274 20:51:25 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:38:10.274 20:51:25 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:10.274 20:51:25 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:10.274 20:51:25 -- common/autobuild_common.sh@453 -- $ get_config_params 00:38:10.274 20:51:25 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:38:10.274 20:51:25 -- common/autotest_common.sh@10 -- $ set +x 00:38:10.274 20:51:25 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:38:10.274 20:51:25 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:38:10.274 20:51:25 -- pm/common@17 -- $ local monitor 00:38:10.274 20:51:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:10.275 20:51:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:10.275 20:51:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:10.275 20:51:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:10.275 20:51:25 -- pm/common@21 -- $ date +%s 00:38:10.275 20:51:25 -- pm/common@25 -- $ sleep 1 00:38:10.275 20:51:25 -- 
pm/common@21 -- $ date +%s 00:38:10.275 20:51:25 -- pm/common@21 -- $ date +%s 00:38:10.275 20:51:25 -- pm/common@21 -- $ date +%s 00:38:10.275 20:51:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715626285 00:38:10.275 20:51:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715626285 00:38:10.275 20:51:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715626285 00:38:10.275 20:51:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715626285 00:38:10.275 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715626285_collect-vmstat.pm.log 00:38:10.275 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715626285_collect-cpu-load.pm.log 00:38:10.275 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715626285_collect-cpu-temp.pm.log 00:38:10.275 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715626285_collect-bmc-pm.bmc.pm.log 00:38:11.217 20:51:26 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:38:11.217 20:51:26 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:38:11.217 20:51:26 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:11.217 20:51:26 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:11.217 20:51:26 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:11.217 20:51:26 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:11.217 20:51:26 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:11.217 20:51:26 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:11.217 20:51:26 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:11.217 20:51:26 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:11.217 20:51:27 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:11.217 20:51:27 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:11.217 20:51:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:11.217 20:51:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:11.217 20:51:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:11.217 20:51:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:11.217 20:51:27 -- pm/common@44 -- $ pid=3366410 00:38:11.217 20:51:27 -- pm/common@50 -- $ kill -TERM 3366410 00:38:11.217 20:51:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:11.217 20:51:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:11.217 20:51:27 -- pm/common@44 -- $ 
pid=3366411 00:38:11.217 20:51:27 -- pm/common@50 -- $ kill -TERM 3366411 00:38:11.217 20:51:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:11.217 20:51:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:11.217 20:51:27 -- pm/common@44 -- $ pid=3366413 00:38:11.217 20:51:27 -- pm/common@50 -- $ kill -TERM 3366413 00:38:11.217 20:51:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:11.217 20:51:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:11.217 20:51:27 -- pm/common@44 -- $ pid=3366441 00:38:11.217 20:51:27 -- pm/common@50 -- $ sudo -E kill -TERM 3366441 00:38:11.217 + [[ -n 2688289 ]] 00:38:11.217 + sudo kill 2688289 00:38:11.227 [Pipeline] } 00:38:11.243 [Pipeline] // stage 00:38:11.253 [Pipeline] } 00:38:11.269 [Pipeline] // timeout 00:38:11.274 [Pipeline] } 00:38:11.291 [Pipeline] // catchError 00:38:11.296 [Pipeline] } 00:38:11.313 [Pipeline] // wrap 00:38:11.318 [Pipeline] } 00:38:11.333 [Pipeline] // catchError 00:38:11.341 [Pipeline] stage 00:38:11.343 [Pipeline] { (Epilogue) 00:38:11.357 [Pipeline] catchError 00:38:11.359 [Pipeline] { 00:38:11.373 [Pipeline] echo 00:38:11.374 Cleanup processes 00:38:11.379 [Pipeline] sh 00:38:11.666 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:11.666 3366519 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:11.666 3366961 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:11.678 [Pipeline] sh 00:38:11.960 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:11.960 ++ grep -v 'sudo pgrep' 00:38:11.960 ++ awk '{print $1}' 00:38:11.960 + sudo kill -9 3366519 00:38:11.972 [Pipeline] sh 00:38:12.259 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:24.496 [Pipeline] sh 00:38:24.782 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:24.782 Artifacts sizes are good 00:38:24.797 [Pipeline] archiveArtifacts 00:38:24.804 Archiving artifacts 00:38:25.044 [Pipeline] sh 00:38:25.329 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:25.349 [Pipeline] cleanWs 00:38:25.358 [WS-CLEANUP] Deleting project workspace... 00:38:25.358 [WS-CLEANUP] Deferred wipeout is used... 00:38:25.365 [WS-CLEANUP] done 00:38:25.368 [Pipeline] } 00:38:25.389 [Pipeline] // catchError 00:38:25.403 [Pipeline] sh 00:38:25.688 + logger -p user.info -t JENKINS-CI 00:38:25.700 [Pipeline] } 00:38:25.715 [Pipeline] // stage 00:38:25.721 [Pipeline] } 00:38:25.737 [Pipeline] // node 00:38:25.742 [Pipeline] End of Pipeline 00:38:25.771 Finished: SUCCESS
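The keyring_file phase recorded earlier in this run drives SPDK's file-based PSK keyring through bdevperf's RPC socket. A condensed sketch of the key lifecycle those commands exercise (the trace interleaves these steps across several sub-tests; the rpc.py path, socket, and temporary key files are the ones visible above):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock
    # register two PSK files as named keys
    $RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.Gqb82FQNyj
    $RPC -s $SOCK keyring_file_add_key key1 /tmp/tmp.0wdCNTwDvD
    # attach an NVMe/TCP controller that authenticates with key0
    $RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    # inspect key state (refcnt, removed flag); the test also removes a key while it is
    # still attached to verify it stays referenced until the controller goes away
    $RPC -s $SOCK keyring_get_keys | jq '.[] | select(.name == "key0")'
    # tear down: detach the controller, then drop the key
    $RPC -s $SOCK bdev_nvme_detach_controller nvme0
    $RPC -s $SOCK keyring_file_remove_key key0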